Brew Is Failing with a Crazy Error

If you're on Mavericks and you're using Homebrew, you may have experienced a weird error message.

/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require': /usr/local/Library/Homebrew/download_strategy.rb:88: invalid multibyte escape: /^\037\213/ (SyntaxError)

This seems to be caused by an update to Ruby version 2.0 as part of Mavericks. All you need to do is make sure that Homebrew points specifically to the 1.8 version of Ruby.

Edit /usr/local/bin/brew

Change the first line from

#!/usr/bin/ruby

to

 #!/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby

Run a brew update afterwards. If brew update works, you're good to go. If it fails, you might need to go nuclear on this bitch. Understand this will probably trash any custom formulas you have.

cd /usr/local
git reset --hard origin/master
git clean -df

Hopefully that solves your problem.

Postgres, Django and that Damn Clang Error

I'm migrating one of my Django projects from MySQL to Postgres. I'm writing this more as a note for myself, but if someone else finds it useful, go for it. If you've done this on a Mac, you may have seen the following errors.

Error: pg_config executable not found.


If you've found that error, then you may not have Postgres installed. If you do have Postgres installed, make sure the install's bin directory is in your PATH. If you don't have Postgres installed, the easiest way to get it is Postgres.app. After installing Postgres, drop to a terminal and add a new value to your PATH.

export PATH=$PATH:/Applications/Postgres.app/Contents/Versions/9.3/bin
pip install psycopg2

Don't feel discouraged when this fails again. Because it probably will. The message is extremely helpful if you've got your Ph.D.

clang: error: unknown argument: '-mno-fused-madd' [-Wunused-command-line-argument-hard-error-in-future]
clang: note: this will be a hard error (cannot be downgraded to a warning) in the future
error: command 'cc' failed with exit status 1


It may not be downgraded in the future, but today is not tomorrow. So let's hack this bad boy.

Things you need:

  1. If you don't have it already, download Xcode and install it.

  2. Drop to a terminal and install the command line tools with

    xcode-select --install

  3. If you already had Xcode installed and it's version 4.2 or earlier, skip ahead to step 4. If you downloaded Xcode in step 1, you'll need to install some additional compiler tools that were removed from Xcode. The best way is to use Homebrew. (You are using Homebrew, RIGHT?)

    brew install apple-gcc42

  4. Once that's complete, try the install again. If it works, you're done. If not (and it probably won't), move on to step 5.

    pip install psycopg2

  5. Set an environment flag to skip the BS compiler flags being used.

    export ARCHFLAGS=-Wno-error=unused-command-line-argument-hard-error-in-future
    pip install psycopg2


With any luck, that will result in a successful install.

IT and the Empathy Deficit

This post is REALLY late, but I think the topic is still relevant, even if the trigger events have faded from our memory.

The Information Technology field is completely devoid of any capacity for self-reflection. The whole damn thing, from companies, to boards of directors, to developers, to system admins. How easily and quickly we wag our fingers when someone else fails, yet when we ourselves fall down, there’s a “perfectly logical explanation”.

In case you were under a rock last Friday, many of Google’s services went down in an extended outage. I know that in our fast-paced world of hyper-connectivity, 25 minutes without email or documents is the end of the world. There’s the entrepreneur who finally got his chance to pitch in front of a venture capital firm but couldn’t get to his presentation. The college kid who was trying to print his assignment before making a mad dash to beat the deadline. I get it: these services impact our lives in major ways.

But it’s alarming to see how the people who should understand the most are the first to pile on. Yahoo just couldn’t help themselves and tweeted about the issue multiple times. They have since apologized, but honestly, at this point, who cares.

But as the Twitterverse collectively freaked out, everyone in my office was cool as a cucumber. Sure, we couldn’t access email, but we knew Google would fix the problem and be back up as soon as possible. How did we know?

Because it’s what we would do.

News flash. Sometimes people make mistakes. Sometimes process fails. Sometimes gaps we didn’t know about are found. Sometimes test cases are missed. As a developer, tester or system admin, have you never made a mistake? Have you never let a bug slip into production? Have you never under-estimated the impact of a change? If you’re perfect, then this message isn’t for you. But if you’re like the other 99.999% (see what I did there?) of people in our field, I’m sure we can agree on a few things.

  • Google’s uptime is pretty damn good.
  • Google is run by some pretty smart people.
  • Even smart people can be fallible.
  • Downtime is a human tragedy. We should treat it with respect.

That last one sounds crazy, but seriously. For someone on that Site Reliability Team, the outage wasn’t a laughing matter. It probably doesn’t feel good to know that the Internet is collectively dismayed and disgusted by a mistake you made, even though 50% of people wouldn’t understand the mistake if you explained it to them. Instead of ridicule, we should encourage open dialogue about how mistakes like this are made, so everyone, not just Google can learn from them.

Outages are learning opportunities for everyone. Why did it happen? Was it a tools failure? I’m sure others would like to know if it’s a tool they use as well. Was it a process failure? Open dialogue about the failures of traditional IT Operations shops had a huge hand in forming the DevOps movement. Was it human error? Why did that person think the action they took was the right one? If it made sense to them, it will make sense to someone else, which means you might have a documentation or a training issue.

All of these problems are correctable, but only if we feel comfortable talking about our failures. The constant ridicule and cynicism our industry shows when someone fails threatens the dialogue necessary to fix them.

Google has shared some details about the outage, and I’m happy to say that kind of openness seems to be a growing trend among companies. But what about at a lower, more personal level?

I challenge those in our field.

  • Be fallible.
  • Be open with your failures.
  • Get to the heart of why the failure happened. Don’t just call it a derp moment and move on.
  • Recognize when someone is trying to do these things and encourage it.

Taking Back My Wasted Time

I was on the train today and the worst possible thing on the planet that could happen to me happened. My phone died. Normally I have a contingency plan for that, but all of them fell through and I was forced to ride the train in total silence, with nothing but my imagination to pass the time. This is the perfect example of wasted time. And despite all the iPhone’s ways of keeping me connected to the world, the single greatest accomplishment of the mobile era is helping me reclaim those wasted moments in my life.

I’m a productivity nut. But how I categorize productivity might be different than the usual definition. Things I categorize as being productive that might be surprising are:

  • Reading books
  • Watching television
  • Playing video games

Why are these productive? Because living a good life also requires having some fun. And in this new, always-on world, it can be easy to lose sight of that. So in order to live my life balanced, I actually have to put these fun things in my to-do app because if I don’t, I may not make the time for them. So watching 30 minutes of the Daily Show during a train ride is a huge win for me. I’ve now made that wasted time productive, and without the added guilt of thinking “there are 40 other things I could be doing right now.”

In the same light, there are some things that I categorize as unproductive yet they still need to be done for various reasons.

  • Cleaning the house
  • Grocery shopping
  • Cooking dinner

These are things that HAVE to be done in everyone’s life. But they’re unproductive to me because

  1. I can find other ways to get the same result. (Take out?)
  2. The end result doesn’t faze or impact me in a really meaningful way. (So there are dishes in the sink. I don’t mind; I’ll wash the dish I need when I need it.)

Fortunately I have a wife who very much cares about these things. She saves me from my own slothfulness. But I make these tasks bearable by pairing them with mobile devices so they become productive too. I’ll wash dishes, cook dinner and grocery shop all in the same day if I can also listen to my podcasts or audiobooks, because those activities are deemed productive and therefore have saved me from what would be wasted time. (Although that time is greatly appreciated by the Mrs.)

I try my best to guard against this wasted time. It’s so bad that I carry 3 devices just to make sure it doesn’t happen.

  • iPhone 5 - My go-to device for reclaiming wasted time. Works in just about any scenario. Sometimes excessive use prevents you from having all the juice you need when you need it. So in case my iPhone dies, I have as a backup ….
  • iPad 3rd Generation - Next best thing to the iPhone. I don’t have the 4G model, but if my phone has enough juice I can tether. This allows me to do some work if I need to or just catch up on some digital magazines (Wired, Newsweek, Time) or read my RSS Feed at a more comfortable scale. But the 3rd Gen iPad is heavy (First world problem) and can be a pain to use if I don’t have a seat on the train. Not to mention the frigid Chicago Winters can make operating the touch screen a challenge on some mornings. For those days I have ….
  • Amazon Kindle E-Reader (2nd Gen) - The Kindle is a great lightweight device. Battery life is great and with buttons I can operate it with gloves on in the cold. It’s also incredibly easy to use one handed.

Despite these gadgets, the perfect storm occurred today and I was forced to stand in silence, watching others be productive. I mentally created a mind map for this post, which made me feel a little better (especially since I'm actually writing and posting it), but all in all it was a frustrating experience.

Good or bad, the mobile revolution is pushing us to be busy bodies both in our work and our leisure. The conversations about mobile are always around being connected or disconnected from the world around. For me, it’s just about getting shit done.

Parsing the Contents of Mailq

Every so often you come across that task that you think is going to be insanely easy. But alas, you roll up your sleeves, get into the nitty-gritty and discover a twisted twisted web of minor problems. That's been my experience with the Postfix mailq.

I wrote a mailq log parser in Python. The mail queue is where Postfix and Sendmail dump emails that can't be sent or processed for various reasons. It's a running log of entries that can be dumped out in plaintext via the mailq command. I thought this task would be simple, but I ran into a few hurdles.

1. Records span a variable number of lines

A record can span multiple lines, which means you can't simply iterate through the file line by line. You need to figure out where a record begins and ends. So far, from my testing, it appears the number of lines is based on the reason the mail is in the queue. Which leads me to item #2.
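Here's a minimal sketch of how I'm grouping lines into records. It assumes records are separated by blank lines, which matches the output I've been testing against, but check your own mailq output before trusting it.

def iter_records(lines):
    """Group raw mailq output lines into per-message records."""
    record = []
    for line in lines:
        if line.strip():
            record.append(line.rstrip("\n"))
        elif record:
            # A blank line marks the end of the current record.
            yield record
            record = []
    if record:
        yield record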

2. Figuring out why the record is in the queue

This was trivial, but it was an odd design choice. An item could be in the queue for various reasons, and there is a special character at the end of the queue ID that tells you which one: * means one thing, ? means another, and the lack of a special character means yet another. Once you've parsed the queue ID out, it's trivial to check the last character, but why not just make it a separate field?
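In code, that check is about as small as it sounds. This sketch only looks for the two flag characters mentioned above, which is an assumption about my Postfix output rather than a complete list.

def split_queue_id(raw_id):
    """Split a mailq queue ID into (queue_id, status_flag)."""
    if raw_id and raw_id[-1] in ('*', '?'):
        return raw_id[:-1], raw_id[-1]
    # No trailing flag character is itself a third state.
    return raw_id, None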

3. Different versions of Postfix have slightly different outputs

As soon as I ran a test in our QA environment, I learned that different Postfix versions make slight modifications to the output of the mailq command. The annoying part is that the changes aren't substantive at all, just change for change's sake as far as I can tell. Now the email address is bracketed by <> characters. The count of items in the queue is at the beginning of the output instead of the end. And the text describing that number changes its wording just a tad: "Total Requests" instead of "Requests in Queue". Not very useful.

4. The date doesn't include a year

I mean...really? And the date isn't formatted in a MM-DD-YYYY format. It's more like "Fri Jan 3 17:30". So now you're converting text in order to find the appropriate date. This post is timely, too, because the beginning of a new year is when this is really a pain. The fix I'm using so far is to assume it's the current year and then test the derived date to see if it's in the future. If it is, fall back to the previous year. This assumes you're processing the queue regularly. It's an auditor's nightmare.
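Here's a sketch of that fallback. The format string is derived from the example date above, so treat it as an assumption if your mailq output looks different.

from datetime import datetime

def parse_queue_date(text, now=None):
    """Parse a year-less mailq date like 'Fri Jan 3 17:30'."""
    now = now or datetime.now()
    parsed = datetime.strptime("%d %s" % (now.year, text), "%Y %a %b %d %H:%M")
    # If assuming the current year puts the entry in the future,
    # it must really be from last year.
    if parsed > now:
        parsed = parsed.replace(year=now.year - 1)
    return parsed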

None of it was terribly difficult, but it was more difficult than it needed to be. It's as if they wrote the mail queue to only be parsed by humans. I'll be working on my MailQReader for a little bit because I have a need for it at work.

Is DevOps a Position?

I’ve heard a lot about the term DevOps, mostly from employers or from technical recruiters looking to fill these roles. When I hear people talk about DevOps, they’re largely talking about Chef, Puppet, CFEngine or just general configuration management. While I believe the whole Infrastructure as Code movement is extremely helpful, that’s not the end-all be-all of DevOps.

The more I learn about the DevOps movement, the more I realize it’s already falling victim to the bandwagon types in our industry. People are scrambling to build DevOps teams in addition to their development and operations teams, which defeats the purpose of DevOps entirely. DevOps is not a position, but a mindset. It’s an organizational structure. It’s a methodology. The idea of taking developers and operations staff and destroying the barriers and silos that exist between them is key to the DevOps movement.

People are finding different ways of doing this, but I think one of the best solutions might be embedding operations staff members into stream teams. This goes a long way toward ensuring that the needs of the Operations staff are considered during the development cycle and toward maximizing collaboration across the organization. It also satisfies the separation-of-duties concerns of auditors, regulators and general compliance wonks.

Building a new DevOps team doesn’t work for a few reasons. First, it simply replicates the issues that currently exist: silos of staff members who often have disconnected goals and incentives. The goal of Operations is to keep a stable system. What better way to stabilize a system than to thwart change? The goal of developers is to deliver new features and functionality to the end user. You can’t do “new” without “change”, so these two goals are instantly at odds with one another.

If you add a DevOps silo to this picture, it will naturally land right between these two tribes. The DevOps team’s existence lends itself to the idea that it’s supposed to bridge the divide, but in reality it will become a dumping ground. Developers don’t need to worry about integration or its impact on the system, because that’s DevOps’ job. Operations doesn’t need to worry about how new deployments will impact production, because that should be vetted out by DevOps in the QA phases. Before long DevOps is reduced to simply serving as a QA Operations team and Release Engineering. This doesn’t solve the problem though. The poison still runs through the veins of the organization. You’ve just taken the antidote and diluted it.

Like a lot of problems in the world, the issue boils down to incentives. Not just monetary incentives, but the actual goals of the team and the company. The more silos you build, the more you fragment the overall goals of the organization. Silos put blinders on people when it comes to the goals of others. Imagine a team of 4 people building a car. The goal is to make a car that is “fast and cheap”. Then you split the group into 2 teams and give one group the responsibility of “fast” and the other group the responsibility of “cheap”. You can imagine the outcome.

Whether you agree with my approach to DevOps or not, what is undeniable is that DevOps is not a job title. To implement DevOps in your organization, it does require a very specific skill-set. You need developers who understand systems and the impact design and coding choices have on the servers. You need administrators that understand code at a deeper level than simple if/then/else constructs. But rebranding your people under the DevOps moniker, or worse, creating a new DevOps team is no solution.

Invalid Syntax When Using or Installing S3Cmd Tools

I installed the s3cmd tools on a new machine that was spun up for a testing environment. I was quickly greeted with an ugly error message.

utc = datetime.datetime(*dateS3toPython(s3timestamp)[0:6], tzinfo=pytz.timezone('UTC'))
                                                           ^
SyntaxError: invalid syntax

Not a very helpful error if you're new to Python. I'm not 100% sure what the problem is here, but it appears that Python 2.5.2 (and older) has a problem with list expansion combined with a keyword argument.

Here's some quick dummy code I put together to illustrate.

Python 2.5.2

>>> def test(*args, **kwargs):
...     print "Received Args"
...     print "Received Kwargs"
...
>>> arg = (1, 2, 3, 4)
>>> test(*arg, test='bar')
  File "<stdin>", line 1
    test(*arg, test='bar')
                   ^
SyntaxError: invalid syntax
>>> test(1, 2, 3, 4, test='bar')
Received Args
Received Kwargs

But running this same code on Python 2.6.8 (which is in the Redhat Repos) doesn't produce the problem at all.

So the easy fix is to upgrade your version of Python. I've reported the bug to the s3cmd team to address. My guess is they'll just require a newer version of Python. Their current version test only looks for 2.4 or better, which is probably out of date.
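If they do bump the requirement, the guard itself is tiny. This is only a sketch of what such a check could look like, not s3cmd's actual code:

import sys

# Hypothetical guard: refuse to run on anything older than 2.6, since that's
# where unpacking combined with a keyword argument became legal syntax.
if sys.version_info < (2, 6):
    sys.exit("ERROR: Python 2.6 or newer is required")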

Protect Production Environments from Test Environments with IPTables

Thanks to the flexibility of virtual machines, you've probably found yourself with a clone of a production machine being deployed to a test environment. There are a variety of reasons to do this. Maybe you're preparing for an application upgrade, tracking down a particularly nasty bug or building a clone of your production environment for QA.

The fear is always "How do I prevent the clone from acting on production?" It's a very real fear, because it's easy to miss a configuration file. In an ideal scenario you'd have the test environment on a different network segment that has no connectivity to the production environment. But if you're not that lucky, then there's iptables to the rescue!

With iptables you can use the below command to prevent your test host from connecting to production.

iptables -A OUTPUT -m state --state NEW -d <production_ip> -j DROP

This command will prevent any new connections from being initiated FROM your test server to the server specified by <production_ip>. This is handy because it still allows you to make a connection FROM the banned production box to the target test box. So when you need to copy those extra config files you forgot about, it won't be a problem. But when you fire up the application in the test environment and it tries to connect to prod, iptables will put the kibosh on it.

If you're super paranoid, you can execute

iptables -A OUTPUT -d <production_ip> -j DROP

This will prevent any outbound packets at all from going to <production_ip>.

Don't forget that the iptables command doesn't persist by default, so a reboot will clear added entries. To save the entries and have them persist, execute:

service iptables save

Good luck!

mysqldump Command Not Working in Shell Scripts

I was working on a quick and dirty mysql backup script today. It was nothing complex, yet for some reason I could not for the life of me get it to execute. Here's the script in its entirety, minus the obvious redactions.


#!/bin/bash

DB=${1}
USER="USER"
PASS="PASSWORD"
BACKUP_DIR="/tmp/sqlbackups"
CMD="/usr/bin/mysqldump"
DATE=`date +%Y%m%d`
BACKUP_COMMAND="${CMD} -u${USER} -p'${PASS}' ${DB}"

echo "`date` -- Backing up database ${DB}"

${BACKUP_COMMAND} > ${BACKUP_DIR}/${DATE}.${DB}.sqldmp
rc=$?

if [ $rc -eq 0 ];
then
    echo "`date` -- Zipping up DB"
    /bin/gzip ${BACKUP_DIR}/${DATE}.${DB}.sqldmp

    echo "`date` -- Deleting old backups"
    /bin/find ${BACKUP_DIR} -name "*${DB}.sqldmp.gz" -mtime +5 -exec rm {} \;
else
    echo "`date` -- Backup Failed! Aborting"
fi

I continuously received an "Access Denied" error message for the user. Just so I knew I wasn't crazy, I echo'd the command that was being executed, copied and pasted it and voila. Backups. WUT!?!

The current password was pretty long and contained spaces, so I figured maybe the spaces were causing problems. I created a brand new user in MySQL.


grant select, lock tables on `databasename`.* TO 'databaseuser'@'localhost' IDENTIFIED BY 'password with no spaces';

Same results. Works when I copy and paste the command, but doesn't execute through the script.

So to debug the thing, I removed the -p from my command line so that I'd be prompted for a password. DISCO! It worked. WUT!?!?

Now that I switched the password to be a password with no spaces, I decided for shits and giggles to remove the single quotes. Suddenly I'm in backup heaven.

I don't claim to be an expert on the evils of string variables in bash, but my understanding was that putting single quotes inside of a double-quoted string would produce literal single quotes. Based on the output when I echo'd the command line variable, that's EXACTLY what was happening. The catch is that when bash expands an unquoted variable it does word splitting but never re-parses quotes, so those literal single quotes were handed to mysqldump as part of the password itself. Copying and pasting the echo'd command worked because the shell parsed the quotes fresh that time; inside the script, mysqldump never stood a chance.

Odd, but solved. I saw a bunch of people reporting the same problem on the web with no answers, so I thought I'd post my experiences.

Pandas and Django

I've been playing around with the pandas library recently. For those who don't know, pandas is a powerful data analysis library written in Python. I've never used R, but from what I hear, pandas is heavily influenced by it.

My goal is to get some of my data from a Django project into pandas so I can leverage some of its awesome features. pandas has a few options for getting data converted to a data frame, one of its core data types. It can read tabular data and CSV files, and can even accept a raw SQL query.

Considering I've already got the data I want in my database, the raw SQL query seems like the way to go. There are a few projects out there that help add pandas support to your Django project. I haven't used it yet, but the django-pandas project looks interesting.

For my purposes though, I just needed something quick and dirty. Writing SQL queries by hand seemed silly; considering all of these pretty models I have lying around, why not use them? So here's how I leverage the Django ORM to make life a little bit easier.

First I'll need to import the necessary libraries.

import MySQLdb
import pandas.io.sql as sql
from football.models import FantasyPoints

MySQLdb is the connector library I'm using for my database. pandas.io.sql is a parser for converting query results to the pandas data frame data type. The last line is the model from my project.

records = FantasyPoints.objects.filter(statcardseasonyear=2012)
db = MySQLdb.connect('','',' ','')

First I'm going to grab a QuerySet object. We'll use the QuerySet to produce the SQL for us. Also remember that your query set isn't evaluated until you iterate it, so we haven't hit the database yet. The second line is setting up our DB connection object.

Now we can use the sql object to fetch our data and convert it to a data frame.

df = sql.read_frame(str(records.query), db)

Our QuerySet object has a property called 'query'. It holds the internals for query construction, but if you wrap it in a str() function call, you'll get the raw SQL. Pass that along with the database connection object and voila, you have a data frame object built from your database.

Caveats

There are a few gotchas here though. For starters, the query property is not part of the public API, so things could change. The documentation says it's safe for pickling, so it's probably not going away, but who knows how it might change. If you're going to use this approach in production code, I HIGHLY recommend you add a layer of abstraction around it. That way, if it changes, you've only got to make your modifications in a single class or function.
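That abstraction can be as small as a single helper. Here's a sketch that keeps the non-public query property and the read_frame call in one place:

import pandas.io.sql as sql

def queryset_to_frame(queryset, connection):
    # The only spot in the codebase that touches QuerySet.query directly.
    # If Django changes the internals, this is the one function to fix.
    return sql.read_frame(str(queryset.query), connection)

df = queryset_to_frame(records, db)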

Another gotcha is that the str() representation doesn't quote WHERE clauses that filter on strings appropriately. So let's say your query has a filter like
WHERE position = 'RB'

The query property will output the query with no quotes around RB, making the SQL syntax invalid. I haven't spent much time digging into this because I haven't had to filter on a string. With pandas in the mix, my strategy has been to filter as little as possible in SQL and do my slicing and dicing in pandas. (And cache the data frame for later reuse.)

With these two big gotchas, your mileage may vary. I happen to do a lot of work in the IPython Notebook, which makes this approach very quick and simple for me.

Oh, and speaking of IPython and Django projects: if you are using the notebook, you'll want to check out the Django Extensions module. It allows you to launch a notebook with all of your Django project variables intact. Anyhoo, I've babbled long enough.

Happy Hacking

Why Are There So Many Pythons? From Bytecode to JIT | Toptal

Found a great blog post on the many implementations of Python today. If you only take one thing away from the article, this would be it.

"The first thing to realize is that ‘Python’ is an interface. There’s a specification of what Python should do and how it should behave (as with any interface). And there are multiple implementations (as with any interface)."


How and why tmp_table_size and max_heap_table_size are bounded

tl;dr -- If your query's temporary table exceeds the lower of these two settings in size, it gets written to disk. These two parameters need to be updated together.


Structuring Selenium Test Code

I had the opportunity to use the Selenium browser automation suite for a project at my, now previous, job. Our testing needs weren't anything spectacular, especially considering we had absolutely zero in the way of automated testing for the project I was working on. I figured anything now would just be icing on the cake.

Playing around with Selenium, the very first thing I struggled with was how to structure my test code. I did a bit of research and a bit of my own architecture to produce a solution that worked well for me. Your mileage may vary.

Page Objects

The first major revelation for me was the idea of structuring each web page as an object. Each of these page objects should return another page object from an operation. Sometimes it might be the same page; other times it will be the resulting page of a particular action. This allows you to handle the WebDriver API inside the Page classes and removes the need to handle drivers in your actual test code.

Sample Login Page in Python

from selenium import webdriver


class LoginPage(object):

    def __init__(self, driver=None):
        self.url = ''
        self.driver = driver or webdriver.Chrome()
        self.driver.get(self.url)

    def type_username(self, username):
        self.driver.find_element_by_id('txtUserName').send_keys(username)
        return self

    def type_password(self, password):
        self.driver.find_element_by_id('txtPassword').send_keys(password)
        return self

    def submit_login(self):
        self.driver.find_element_by_id('btnlogin').click()
        return HomePage(self.driver)


def main():
    login_page = LoginPage()
    login_page.type_username('jeff')
    login_page.type_password('password')
    new_page = login_page.submit_login()

This quick set of code illustrates the structure. From the client perspective, it masks all of the nasty element searching and driver passing, and makes the actual test code (the main routine) pretty clean and legible. Since a lot of testers aren't full-blooded programmers, the ease of reading the main routine is probably going to be very appreciated.

Composite Objects and Inheritance

In most situations, a page has several common components. For example, on this blog, every page has a navigation bar at the top with menu items. The way I handled this scenario is by making the navigation bar a composite object of the page: the navigation is handled by a NavBarComponent class. In my example, I navigate menus by passing a list that holds the names in the navigation tree. So if the menu navigation would be Server->Inventory->Details, I'd pass a list of ['Server', 'Inventory', 'Details'].

from selenium.webdriver.common.action_chains import ActionChains


class NavBarComponent(object):

    def navigate_menu(self, driver, *args):
        for arg in enumerate(args):
            # Hover over each menu item in turn, then click it.
            element = driver.find_element_by_partial_link_text(arg[1])
            hover = ActionChains(driver).move_to_element(element)
            hover.perform()
            click = ActionChains(driver).click(element)
            click.perform()
        page_class = self.get_final_page_object(arg[1])
        return page_class(driver)

The get_final_page_object method is just something I created so that I can dynamically return a Page Object; it's not important for our discussion here. So now, navigation can be handled by any page that composes this component.

class HomePage(object):

    def __init__(self, driver):
        self.navbar = NavBarComponent()
        self.driver = driver

    def navigate_menu(self, *args):
        return self.navbar.navigate_menu(self.driver, *args)
Now our HomePage object exposes the navigate_menu method but simply passes that call to its composite NavBarComponent object. This allowed me to keep my code pretty focused on specific tasks. It also has the added benefit that we can either update the NavBarComponent independently in the event it's ever changed, or substitute the NavBarComponent for some specialty navigation bar in the event a page behaves differently than others.

I extended this same approach to things like DataGrids. Typically a DataGrid object will be reused from page to page. Sure, simple things like column names may change, but just by parameterizing those small changes, you can easily use the DataGrid object across multiple pages.
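As a hypothetical sketch (the element IDs and column handling here are invented for illustration, not lifted from the real project), parameterizing a grid might look like this:

class DataGridComponent(object):

    def __init__(self, driver, grid_id, columns):
        # grid_id and columns are the only per-page differences;
        # the scraping logic below stays identical on every page.
        self.driver = driver
        self.grid_id = grid_id
        self.columns = columns

    def rows(self):
        grid = self.driver.find_element_by_id(self.grid_id)
        for row in grid.find_elements_by_tag_name('tr'):
            cells = [cell.text for cell in row.find_elements_by_tag_name('td')]
            if cells:
                yield dict(zip(self.columns, cells))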

Inheritance

Lastly, we use inheritance to eliminate more code. In my project, we had a Server Inventory page, a Storage Inventory page and a Network Device Inventory page, and all of these pages had most of the same components and logic. That allowed me to place all the common actions into an InventoryPage class, which handled nitty-gritty things like sorting by column name or grouping by column name. These are functional steps that would exist on each page. By encapsulating all of those items into the InventoryPage base class, I can quickly add functionality to a bunch of different pages at once.
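Roughly, the hierarchy looked like this. The method body is an assumption for illustration; the point is where the shared code lives:

class InventoryPage(object):
    """Base class for behavior shared by every inventory page."""

    def __init__(self, driver):
        self.driver = driver
        self.navbar = NavBarComponent()

    def sort_by_column(self, column_name):
        # Shared sorting logic lives here once instead of on every page.
        self.driver.find_element_by_link_text(column_name).click()
        return self


class ServerInventoryPage(InventoryPage):
    pass


class StorageInventoryPage(InventoryPage):
    pass


class NetworkDeviceInventoryPage(InventoryPage):
    pass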

Here's a quick UML diagram of what this might look like.

UML Diagram of InventoryPages

This is by no means the absolute best way to do things. I'm sure there are plenty with more experience in Selenium that have other approaches. But this approach has been easy to maintain, easy to follow and easy to understand. I think that's what most of us are looking for in our Test Suites. Your mileage may vary.

Chicago Nerd Social Club


Organizing Work as a Solo Programmer

In a team environment, you have things like bug reporting, feature request management, Agile approaches and sprints. If you're lucky, you might even have a dedicated Scrum Master and Product Owner. But when it's just you, the idea of these sorts of constructs seems like overkill. Nothing could be further from the truth. Documenting your approach applies a type of structure that will reap huge benefits, not necessarily in the quality of your code, but in the productivity of your coding sessions.

Requirements Tracking

If you're using GitHub, there is a feature of the site to track "Issues" for your project. Issues is probably a bad name for it though, because with a little metadata/label magic, it can track features, enhancements, etc. When I started working on my little side project, I set up a list of "requirements" that I needed for my first release. I have entries like:

  • Users can sort players by Fantasy Points
  • Users can filter by position
  • Users can enter their own stat projections for calculations
  • Users can create their own draft board

These are just a few examples, but you get the idea. Once these requirements start to take form, you'll quickly realize it'll take forever for you to actually be able to release anything. That's where milestones come in. In Github you can use Milestones to group requirements and issues for a particular release. This gives you small victories along your path to awesomeness.

Issue Tracking

As I go through my testing, I inevitably find bugs. But sometimes I'm on a roll in a certain mode of thinking and don't want to have to context switch to deal with this new problem. This is where I use Github's Issues to log anything and everything I need to go back and address. This is nice for a number of reasons.

  • I'll never forget an issue that I need to address
  • I can begin to see large trends in my code based on the type of issues I'm encountering
  • I can collect bugs and deal with them in a "bug release"
  • It builds discrete units of work for commits

That last line is a big one. Prior to my discovery of the usefulness of this type of tracking, my commits were haphazard at best; they were largely just ways to save my code changes. You know you've got this problem if you always struggle with your commit messages. If you don't know what to type in a commit message, then chances are you're not breaking your work up effectively. With issues and requirements, knowing when to commit becomes automatic. You finish an issue, you make a commit and push it. You finish a feature request, you commit and push it to the remote repo. Easy as pie.

Planning Your Workday

When you're tracking requirements and issues, it makes your coding sessions more productive. If you're like me, you don't program full time, so you spend your nights and weekends coding your brains out. Because of that, you need to make your time as effective and efficient as possible. When you track your requirements, issues, etc. in this manner, you can simply sit down, look at the outstanding items, pick 2 or 3 and get to work. You'll want to keep in mind the difficulty of the issues you're tackling and the time you have for this particular working session. It might be better to tackle a bunch of smaller issues (and get them finished) than to half-start a larger issue. But whichever approach you take, you'll be ready to tackle the work day.

To some folks this may seem like a no-brainer. But for the solo programmer, the things suggested here may sound like overkill. The image of the keyboard cowboy hacking away and addressing issues as they spring up may sound sexy. You might even be delusional enough to think you won't run into issues and therefore don't need issue tracking. But either way, I beg you to try this approach and see if it benefits you. I know it has completely changed the way I work and how much work I get done.

This Ain't Your Daddy's Star Trek

Star Trek Into Darkness is finally here! I'm not going to go into a detailed review of the movie, as plenty of people are discussing the pros and cons of the films. Let it be known that I enjoyed the film immensely, but have accepted the fact that this is not the Star Trek I know and love.

There's nothing wrong with this reboot being different from the original series both in tone and substance. In the first film, director J.J. Abrams skillfully rebooted the series in a way that didn't nullify the adventures from the late 60's and the movies. An alternate timeline offers Abrams the chance to re-imagine the series in his own way without being completely married to Roddenberry's vision.

The problem though is that Abrams doesn't divorce himself from the Enterprise's previous incarnation. The Abrams films attempt to establish themselves as their own, while simultaneously giving massive head nods to the original series. This approach works for films like The Italian Job, but it doesn't work so well for properties with massive followings like Star Trek.

Well, it almost doesn't work. The box office tallies are really the only opinion that the movie industry listens to, and without question the reboot has been a smashing success. For people new to Trek, these films are exactly what they needed to dip their feet into the pool of the Federation. But if one were to go back and watch even the last season of The Next Generation, they would be extremely confused, bored and disappointed.

With the mainstreaming of geek culture, science fiction is becoming quite a lucrative genre. But mainstreaming typically means a sort of homogenization of the content. The majority of science fiction films fail to even attempt to challenge our viewpoint of the world around us, the hallmark of a good sci-fi film. Instead lots of science fiction films devolve into Die Hard with spaceships and lasers. The Star Trek reboot is no different.

The reboot of the series is fantastic. I loved both films and I own the first on Blu-ray. But despite my Trek loyalty, I can't help but think of these films as disposable summer movies. In 15 years, we'll still be talking about Shatner, Nimoy and Kelley. We'll still be doing imitations of iconic scenes and still making parodies of those same scenes. But in 15 years, where will Pine, Quinto and Urban be in our memories? Will they occupy our hearts and minds the way Stewart, Frakes, Brooks and Visitor do?

The Star Trek reboot is a good thing for Trekkies. It's putting the property back in the minds of the population. What comes out may not be the product that we're looking for, but it's still a product you can enjoy.

The Perils of Time Shifted Television - Chicago Nerd Social Club

I wrote a post entitled The Perils of Time Shifted Television over at the Chicago Nerd Social Club blog. A little cross promotion here.

Time shifting our television has some side effects that we’re passively aware of, but may never truly factor into our overall experience satisfaction.


Time shifting can be a dangerous thing.

Destroying the Work/Life Walls

I was recently given the awesome chance to switch my work laptop from a dull Dell Latitude to a nice shiny new MacBook Pro with Retina display! It has been a radical transformation in my work/personal computing balance. No longer do I need to segregate my activities to fit my active persona. I'm also far more productive as my work and personal to-dos become intertwined.

Some people may read that and instantly think "He's making phone calls on company time!" That's true, but the truth is most people are not spending 40 hours a week being productive at work. We take water cooler breaks, check Facebook and Twitter, and occasionally get lost in a sea of YouTube cat videos. But in my case, my lost productivity during the 9-5 shift is more than made up during the off hours when I'm on a call at 11pm or working on a change over the weekend.

Previously I had two separate GTD implementations, one for my personal stuff in Omnifocus and another for work using Microsoft OneNote. The problem though is that I had to decide which mode I was in and open/react accordingly. I had 2 separate inboxes and 2 different sets of motivations for GTD. With a single solution now, I simply blow through my tasks, no matter where I'm at or when I'm at it. Saturday morning, I wake up and my task is to read some documentation for work. Done. Then I write a blogpost, then I finish writing that python script for the upgrade. My life never becomes "work/personal" it's all just stuff that needs to get done. Yes, sometimes that means I'm writing a blogpost at work or making a doctor appointment for my daughter after my change management meeting. But it all balances out in the end. (My output at work is proof of that)

Now that I have this seamless experience across work, home and mobile (all running Omnifocus), it forces me to re-examine how I deal with contexts. I find myself using applications more as a context than simply "Computer". With an array of mobile and desktop apps at my disposal, I don't necessarily need to be at a computer. I may have a context simply titled Mind Node for my brainstorming work. When I have music albums that I need to check out, I now have a context called Spotify. This list will continue to grow as I integrate more applications into the totality of my computing life.

This may sound like the ultimate destruction of work-life balance, but for me it brings it all into a closer harmony. My views on this may differ because I've spent a decade being on-call, which forces the blurring of these lines anyways. But the truth of the matter is, I enjoy being plugged in. Being plugged in helps me keep a pulse on things, maintain Inbox Zero and get my to-dos routed to the right place as they come in. All of these things together, keeps me from being overwhelmed with the amount of work tasks that I need to perform.

The key is to ask yourself: does my job allow me to (reliably) erect firewalls between personal and professional? In most IT fields, it seems like that possibility is eroding, but that doesn't have to be a bad thing! If you're at a director level or above, you likely have no chance of leaving work at the office after 5pm. By tearing down the wall between your work and personal life, you have the potential of actually creating more flexibility, not less. Of course this is all dependent on your job and the type of role you have.

This may not work for everyone, and your mileage may vary, but for me, breaking down the work/personal walls has been a tremendous boon.