This post explains a really interesting approach to generating fantasy maps. One part of generating a non-grid-based map is to create Voronoi polygons around a set of random x,y points on the map. There’s a lot of heavy math involved, but there are libraries for calculating them in Python and even PHP.
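The core idea can be sketched without a geometry library: scatter random seed points, then assign every map cell to its nearest seed, which yields a discrete Voronoi partition. A minimal Python sketch of that idea, assuming an arbitrary grid size and seed count (not taken from the post):

```python
import random

def voronoi_regions(width, height, num_seeds, rng=None):
    """Assign each grid cell to its nearest seed point (discrete Voronoi)."""
    rng = rng or random.Random()
    seeds = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(num_seeds)]
    regions = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            # Nearest seed by squared Euclidean distance (no sqrt needed
            # for comparison purposes).
            regions[y][x] = min(
                range(num_seeds),
                key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2,
            )
    return seeds, regions

seeds, regions = voronoi_regions(40, 20, 5, rng=random.Random(42))
```

For real polygon geometry (edges and vertices rather than a cell grid), a library such as `scipy.spatial.Voronoi` does the heavy math for you.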
I read the following article this morning and found a lot of useful insight in it about what it takes to be a programmer long-term. And, though I’m not approaching 55, as my 6-year-old pointed out when I told him my age two weeks ago, I am “almost near 100”… Yes I Still Want To Be Doing This at 56
The part I particularly identified with was the following paragraph:
“The thing I find most important today is that you should never work longer, just smarter. Being older does mean you can’t code 20 hours a day any more, or rather imagine you can code 20 hours a day as it’s not really good coding. Is there a real limit to how many hours a day you can actually be producing a quality application? Probably it does go down over time but as long as you continue to learn how to code smarter the end result is still quality, just with less caffeine.”
When I started out of school 15 years ago, it was very easy for me to just sit and bang out code with little preparation or thought put into it. I’d come back, if there was time, and clean up some bit, or I’d come back months later with no clue what I meant to do and kick myself for the decisions and shortcuts I’d taken. Nowadays, I’m a lot more reflective when I start something, even if it’s a simple class. If I can, I bounce ideas off of colleagues, which at the minimum forces me to articulate the pros/cons of the approaches I’m considering. I spend less time actually writing code, but have cleaner, easier-to-use code as a result, and usually there’s time to refactor and clean up the rough edges.
This question came up yesterday when Sandy and I presented an Introduction to PHP at DC Web Women [slides]. I couldn’t come up with a coherent set of arguments at the time that I could explain easily. These posts do a better job. First, a general programming article on the subject:
Implicit coupling — A program with many global variables often has tight couplings between some of those variables, and couplings between variables and functions. Grouping coupled items into cohesive units usually leads to better programs.
From: Global Variables Are Bad
And a PHP-specific article full of excellent examples:
You may have heard that globals are bad. This is often thrown around as programming gospel by people who don’t completely understand what they’re saying. These people aren’t wrong, they just don’t often program what they preach. I’ve lost track of the number of times I’ve had the “globals are bad” conversation with someone (and been in agreement) only to find their code is littered with statics and singletons. These people are confusing globals (as in the $GLOBALS array) and global state.
Adam Culp posted the 3rd article in his Clean Development Series this week, Dirty Code (how to spot/smell it). When you read it, you should keep in mind that he is pointing out practices which correlate with poorly written code, not prescribing a list of things to avoid. It’s a good list of things to look for, and it engendered quite a discussion in our internal Musketeers IRC.
Comments are valuable
Using good names for variables, functions, and methods does make your code self-commenting, but oftentimes that is not sufficient. Writing good comments is an art: too many comments get in the way, but a lack of comments is just as bad. Code can be dense to parse where a comment will help you out. Comments also let you quickly scan through a longer code block, just skimming them, to find EXACTLY the bit you need to change/debug/fix/etc. Of course, the latter you can also get by breaking up large blocks of code into functions.
Comments should not explain what the code does, but should capture the “why” of how you are solving a problem. For example, if you’re looping over something a bad comment is “// loop through results”, a good comment is “// loop through results and extract any image tags”
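To make that contrast concrete, here’s a hypothetical sketch (in Python, with invented names) where the comment records the “why” rather than restating the loop:

```python
import re

def extract_image_urls(results):
    """Collect image URLs from a list of HTML fragments."""
    urls = []
    for fragment in results:
        # Loop through results and extract any image tags: downstream we
        # only need the src URLs, not the surrounding markup.
        urls.extend(re.findall(r'<img[^>]+src="([^"]+)"', fragment))
    return urls
```

A comment like `# loop through results` above that same loop would add nothing a reader couldn’t already see.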
Using Switch Statements
You definitely should not take this item in his list to mean that “Switch statements are evil.” You could have equally bad code if you use a long block of if/then/elseif statements. If you’re using them within a class, you’re better off using polymorphism, as he suggests, or maybe look at coding to an Interface instead of coding around multiple implementations.
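As an illustration of that refactoring (in Python rather than PHP, with invented class names), the same dispatch can move from a type switch into polymorphic classes:

```python
import json

# Before: a type switch that must be edited for every new format.
def render_switch(kind, data):
    if kind == "json":
        return json.dumps(data)
    elif kind == "csv":
        return ",".join(str(v) for v in data.values())
    else:
        raise ValueError("unknown format: " + kind)

# After: each format is its own class behind a common interface, so
# adding a format means adding a class, not editing every switch.
class JsonRenderer:
    def render(self, data):
        return json.dumps(data)

class CsvRenderer:
    def render(self, data):
        return ",".join(str(v) for v in data.values())

def render(renderer, data):
    return renderer.render(data)
```

The win shows up when the switch is repeated in several places: with polymorphism there is exactly one spot to extend.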
Other code smells
In reviewing the article, I thought of other smells that indicate bad code. Some are minor, but if frequent, you know you’re dealing with someone who knows little more than to copy-and-paste code from the Interwebs. These include:
- Error suppression with @. There are very, very, very few cases where it’s OK to suppress an error instead of handling the error or preventing it in the first place.
- Using superglobals directly. Anything in $_GET, $_POST, $_REQUEST, or $_COOKIE should be filtered and validated before you use it. ‘Nuff said.
- Deep class hierarchy. A deep class hierarchy likely means you should be using composition instead of inheritance to change class behaviors.
- Lack of prepared DB statements. Building SQL queries as strings instead of using PDO or the mysqli extension’s prepared statements can open up SQL injection vulnerabilities.
- Antiquated PHP practices. A catch-all for things we all did nearly a decade ago: depending on register_globals being on, using “or die()” to catch errors, using the mysql_* functions. PHP has evolved; there’s no reason for you not to evolve with it.
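The prepared-statement point is easy to demonstrate. The list above is about PHP’s PDO/mysqli, but the same principle in Python’s sqlite3 module looks like this (the table and values are made up for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Bad: string interpolation lets the input rewrite the query itself.
unsafe_sql = "SELECT name FROM users WHERE name = '%s'" % user_input
unsafe_rows = conn.execute(unsafe_sql).fetchall()  # matches every row

# Good: a parameterized query treats the input as data, not SQL.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()  # matches nothing
```

The unsafe query comes out as `... WHERE name = 'alice' OR '1'='1'`, which is always true; the parameterized version looks for a user literally named `alice' OR '1'='1` and finds none.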
That’s generally what I look for when evaluating code quality. What are some things I missed?
I needed to automate copying files for a website that I was building. Since this site was hosted on an inexpensive shared hosting plan, I didn’t have the luxury of shell or rsync access to automate copying files from my local development environment to the host. The only option was FTP, and after wasting too much time manually tracking down which files I needed to update, I knew I needed an automated solution. Some googling led me to
lftp, a command-line and scriptable FTP client. It should be available via your distribution’s repository. Once installed, you can use a script like the one below to automatically copy files.
#!/bin/sh
# login credentials (placeholder values -- replace with your own)
HOST="ftp.example.com"
USER="username"
PASS="password"
SRC="./web"
DEST="~/www"

# FTP files to remote host
lftp -c "open $HOST
user $USER $PASS
mirror -X img/* -X .htaccess -X error_log --only-newer --reverse --delete --verbose $SRC $DEST"
The script does the following:
- Copies files from the local ./web directory to the remote ~/www directory.
- Uses $HOST, $USER, and $PASS to log in, so make sure your script is readable, writeable, and executable only by you and trusted users.
- The lftp command connects and copies the files. The -c switch specifies the commands to issue, one per line. The magic happens with the mirror command, which copies the files. Since we added the --only-newer and --reverse switches, it will upload only files which have changed.
- You could be a little safer and remove the --delete switch, which removes files from the destination that are not on your local machine.
- You can use -X to give it glob patterns to ignore. In this case, it won’t touch the img/ directory or the .htaccess file.
If you’re still moving files over FTP manually, even with a good GUI, it’ll be worth your time to automate it and make it a quicker, less error-prone process.
An honest write-up with first-hand details of the shortcomings of CouchDB in production. There’s a reason to stick with proven technologies and not simply chase the latest shiny. Not saying Sauce Labs did that, just sayin’.
This post describes our experience using CouchDB, and where we ran into trouble. I’ll also talk about how this experience has affected our outlook on NoSQL overall, and how we designed our MySQL setup based on our familiarity with the positive tradeoffs that came with using a NoSQL database.
Carl Erickson observes that a small, boutique team of developers can be massively more productive than a larger team.
To complete projects of 100,000 equivalent source lines of code (a measure of the size of the project) they found the large teams took 8.92 months, and the small teams took 9.12 months. In other words, the large teams just barely (by a week or so) beat the small teams in finishing the project!
It’s immediately reassuring to see those numbers, since I’ve been on enough projects where, once they start falling behind, the temptation to throw more programmers at them grows. Project managers see it as a resource scarcity problem (not enough programmers) and don’t realize the coordination and communication burden they’re adding by bringing more people onto a project. Now you have a new group of programmers that need to be brought up to speed, learn the codebase, and accept design decisions that have already been made. Your lead programmers won’t have as much time to actually program, since they’ll be helping bring everyone else up to speed. Developers have known about this for years; Fred Brooks wrote the book on it: The Mythical Man-Month.
But while the study’s conclusion is reassuring, I wonder if there are other factors at work. There’s an obvious selection bias in the type of people who go to work at a large IT programming department/shop versus those who choose to work solo or in smaller teams. Are large teams filled with junior 9-5 programmers who just want a steady job but punch out in the evening? Do smaller teams attract more experienced and productive people who prefer to work smarter rather than harder? From the study summary, it doesn’t look like they considered this aspect.
Matthew at DogStar describes his PM toolbox today in The Project Management Tool Box | Opensource, Nonprofits, and Web 2.0. It’s a detailed and well-organized list, and I think it reflects a very practical approach. The first thing that strikes me is the overwhelming number of tools available to the would-be PM. Certainly, there is no lack of tools out there.
You see, the general feeling is, there is no silver bullet. There is no grail of a tool that does everything a single Web Producer, Project Manager, Product Manager, or Content Manager might need or want. There is clearly a gap that is filled with a series of different products. This walked hand in hand with a desire to review processes at work and engage in course corrections. It is an excellent habit to follow – look what you are doing with a critical eye, analyse why you are doing it, and make changes as needed. I have worked across four different shops with a wide variety of different ways of practicing project management. I have used these methodologies and tools across ~ 50 different Drupal projects and another 25 or so custom PHP MySQL projects.
I could not agree more that it’s important not to be seduced into picking the one right tool for every situation. It is a difficult temptation to resist, especially when you have tool makers pushing to sell you a solution. The best tool for the job isn’t the one that has the most features; it’s the one that you, and your team, end up using the most.
As I read the article, a thought that struck me is that sometimes you don’t need ONE tool; you just need to make sure everyone has the right tools (and skills) to be productive and responsible. At work, we’re a tiny team of 3 who deal with day-to-day management of our Drupal site, unexpected requests on tight deadlines, and long-term projects to build new features. Here’s a secret: we don’t have a central bug/ticket tracking tool. We can be productive simply with email, IM, code reviews, and face-to-face conversations. For big projects we use a whiteboard to wireframe, capture tasks, and track progress. This works better than a more sophisticated technical solution that would impose a greater burden on our time.
What’s your experience with tools and grappling with finding the perfect tool?
Seems to be the next big thing in software-process land. So, hire competent people and try to get out of the way.
These things are all the basics you pick up by reading Learn How Not to be a Complete Failure at Software Development in 24 Hours. None of it will make your developers any less prone to do stupid shit, and none of it will prevent your systems administrators from roadblocking developers just for funsies.
Joe Maller outlines a straightforward system for using git to manage code from development copies and branches through production. Deployment to live is automated, but I’d be worried about broken or unreviewed code getting deployed unintentionally. I think the best way to prevent that is to have live be its own branch, and then push changes to the live branch once they’ve been reviewed, tested, and blessed.
While his approach doesn’t require moving or redeploying the live site, I don’t think that works when you’re using Drupal. You’ll want to track Drupal core, sites/all, and your local site’s folders in different branches, per this setup.
The key idea in this system is that the web site exists on the server as a pair of repositories; a bare repository alongside a conventional repository containing the live site. Two simple Git hooks link the pair, automatically pushing and pulling changes between them.