Saturday, February 28, 2015

You'll pay massive interest on your technical debt - careful what you borrow from the future.

When you are programming, you incur a technical debt each time you choose to defer a task that is not absolutely required in order to get a working application.

The task probably still needs to be done at some point in the future, but you have saved time and effort in the short term by deferring it. Just as with money, technical debt requires you to pay in the future for the benefit you have taken in the present.

When you do come back later and see that task waiting to be done, it's going to cost much, much more time and effort to complete, compared to the time it would have taken you to do it in the first place.

When you first saw the task, you had your head in the code.  You had the context in your head of how it all fitted together; you had recently been thinking about the syntax, libraries and methods that this section of code uses.  You had some sense of how risky that code would be to build, and how it interacted with other parts of the system. You had the task context in your foreground thinking.

Whereas implementing the task in the first place might have taken you an hour, completing it as a technical debt task will require you to re-learn all that context.  It's going to take you A LOT more time to get all that context back into place, and there's always a chance you won't be able to fully regain that context, or you'll miss something.  Chances are the price of your technical debt will be many times the initial time and effort required to complete that task.

Choose wisely which tasks you push into your technical debt; the interest rates can be staggering.

Sunday, February 22, 2015

How to get a job in programming: fix 200 bugs on a well known open source project.

This is a question I am asked pretty much every day; now you have the answer.

End of lesson.

Thursday, February 12, 2015

Might it make sense to return from cloud-hosted to self-hosted servers?

Amazon EC2 was launched in 2006 and it was incredibly obvious what a good idea cloud computing was.

Up until then, to have a host computer on the Internet you needed to lug some big clunky box down to an Internet hosting data centre, where they ripped you off because there was so little competition.

Your big clunky box had one, two or more spinning fans in it. If (when) a spinning fan stopped, your machine died - you were in trouble.

Your big clunky box had a spinning hard disk in it. If (when) that stopped, your machine died - you were in trouble.

Your big clunky box power supply had to have huge capacity because CPUs chewed so much power. If (when) that stopped, your machine died - you were in trouble.

Your big clunky box was physically large - at least one full rack unit, if not more, would be taken up with what in today's terms is something pretty underpowered.

And when you were in trouble, you were in big trouble. You had to slouch off down to the data centre with your toolkit and spare machine and spend hours making the damn thing work again.

So when Amazon EC2 turned up it was screamingly obvious that cloud computing was a killer idea because it was just so much better than dedicated hosting in every possible way. Cloud computing as defined by EC2 was clearly one of the best ideas ever in technology. Hardware as software woo hoo.

BUT it's not 2006 anymore. Hardware is shrinking to the point of disappearing. Computers don't necessarily need CPU fans or power supply fans or 500 watt power supplies or tower cases or even spinning hard disks. You can probably run a server in a data centre and have the reasonable expectation that it WON'T break any time soon, as opposed to 2006, when you had the reasonable expectation that it WOULD break soon.

So I'm wondering: MAYBE, since 2006, the dedicated hosting data centre has started to make more sense again.

Maybe tiny, cheap, highly reliable computers can be installed in your local data centre at very low cost, and you can break free of the shackles, lock-in and high prices of the cloud (compared to owning your own hardware).

Maybe the world is different in 2015 and it might even be a good idea to start running your own computers again.

Just a thought. I think I'm going to go find the price lists for dedicated hosting at the local data centre.

The trends of server-class computers becoming smaller and more reliable will continue into the future. The obvious issues of 2006 just aren't such big issues any more. Maybe it's time to break up the cloud and bring the servers back to homes, offices and data centres. That way at least you can see when the NSA is plugging in their USB monitoring devices, and offer their technicians a cup of tea while they work.

Wednesday, February 11, 2015

Structuring applications - Python SQLAlchemy

One of the things that interests me most is how to properly structure the source code of an application.

At the moment I am doing some stuff with SQLAlchemy, so here is what I have found about how to correctly structure SQLAlchemy code over multiple modules/Python files.

This from StackOverflow talks to the issue:

http://stackoverflow.com/questions/7478403/sqlalchemy-classes-across-files/7479122#7479122

This from Michael Bayer, author of SQLAlchemy:

https://groups.google.com/forum/#!topic/sqlalchemy/BtPac9O3ggI
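
For my own notes, here is a minimal sketch of the pattern those two links describe, as I understand it: one module owns the declarative Base, each model module imports that Base, and the entry point imports every model module before creating the tables. The module names (myapp/base.py, user.py, address.py) and the SQLite URL are just my example layout, not anything taken from those posts.

# myapp/base.py - the one shared declarative Base for the whole application
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


# myapp/user.py - each model module imports the shared Base
from sqlalchemy import Column, Integer, String
from myapp.base import Base

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))


# myapp/address.py - models in other files can refer to User's table by name
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.orm import relationship
from myapp.base import Base

class Address(Base):
    __tablename__ = 'addresses'
    id = Column(Integer, primary_key=True)
    email = Column(String(120))
    user_id = Column(Integer, ForeignKey('users.id'))
    user = relationship('User', backref='addresses')


# myapp/main.py - import every model module BEFORE create_all, so the
# shared metadata knows about all of the tables
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from myapp.base import Base
from myapp import user, address  # imported for their side effects

engine = create_engine('sqlite:///app.db')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

As far as I can tell, the important point is that there is exactly one Base (and therefore one MetaData) shared across the whole application; if each model file creates its own declarative_base(), the models end up on separate metadata objects and can't see each other's tables.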