Thursday, May 25, 2006


Are Paul Graham and Joel Spolsky Right?
Or: Should I Start my Own Software Company?

(Taking the Opportunity to Delve in Other Related Matters Along the Way)

Over the last year and a half or so, I've had a growing feeling of uneasiness regarding the work I do as a hired software developer. It was a little bit vague at first, a general itch of dissatisfaction. I initially dismissed it but it kept (and keeps) nagging me. I believe it's a direct result of me reading (too much?) Joel Spolsky and Paul Graham.

These two discuss slightly different matters, but both talk of a "better way" to create software. They propose different means to get us there: some ideas intersect, others don't. However, the overall picture is the same: the prevailing models, methods and tools are defective and should be fixed or replaced. The next step in this reasoning is that if the problem isn't fixed within their current organization, the best programmers should start their own companies. Graham takes that step very explicitly; Spolsky less so, but that's still the direction one naturally considers after reading his essays. (I'm obviously cutting corners here; I can't do justice to all they've written in a few lines.)

The "better way" mantra appeals to me tremendously as a software developer (and, let's face it, to my ego.) I want to believe there's a better way to do it, because the current one leaves a dry taste in my mouth. I'm not sure they're right, but what if they are?

Spolsky and Graham are both successful, which seems to lend some credibility to their claims. However, this fails to impress me in terms of demonstrating anything. Consider the following bit of healthy scepticism:

Don't get me wrong. I like both these authors, as writers and as software developers. I just somewhat doubt their success comes entirely from its alleged source (Lisp and better languages for Graham, the best working conditions and developers for Spolsky). I'm sure they themselves would be the first to admit it.

***

Yet, the current model is wrong. I knew it already, but recently a few specific points crystallized in my mind.

1. We code in Java (mostly). (For the sake of this discussion, I consider it equivalent to C# and better than C++.) I don't hate Java, but I've grown very weary of its limitations, its verbosity (brought to new heights by generics) and Sun's obstinate desire to keep it an "easy" language, even though designing a generics-aware library is anything but.

I've become very proficient with it after 7 years of daily usage. I'm sure there are a lot of people out there who are still better than me, but I know this language and the libraries that come with it well (and a lot better than I cared to). Despite that, I can't be as efficient in Java as I'd like. Sure, I spend less time pleasing the compiler and doing monkey work than, say, in C++, but I believe there is still a very good margin for economy that Java simply doesn't allow. There's just too much repetition all over the place: of iterations, of getters and setters, of patterns, of faked closures ("new Runnable() { public void run() {…", not to mention when you must create dummy final variables or when you need a return value), etc. The language simply isn't abstract enough. In truth, I'm a bit surprised I'm coming to this conclusion myself. I just "see" it, coding similar stuff day in and day out.
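To make the "faked closure" complaint concrete, here is a minimal sketch of the pattern: to make a value visible inside an anonymous Runnable it must be final, and to get a result back out you need a dummy one-element array (or a holder object), since the inner class can't assign to a local variable. (The class and method names are mine, for illustration only.)

```java
public class FakedClosure {
    public static int runAndCollect(int input) {
        final int captured = input;      // must be final to be visible inside
        final int[] result = new int[1]; // dummy holder for the "return value"
        Runnable task = new Runnable() {
            public void run() {
                result[0] = captured * 2;
            }
        };
        task.run();
        return result[0];
    }

    public static void main(String[] args) {
        System.out.println(runAndCollect(21)); // prints 42
    }
}
```

All of this ceremony stands in for what a language with real closures expresses in a single line.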

Just to give a simple and very common example, I can't count the number of times I've written something like:

public class aClass {
    private int myInt;
    private String myString;

    public int getMyInt() {
        return myInt;
    }

    public void setMyInt(int i) {
        if (i < 0) throw new IllegalArgumentException("i must be positive");
        myInt = i;
    }

    public String getMyString() {
        return myString;
    }
}
This is the same boilerplate code all over the place, for every single class (that has setters and getters, obviously.)

Consider this shorter form (fictitious):

public class aClass [int myInt, readonly String myString] {
    myInt: value < 0: "i must be positive";
}
(I'm not saying this form is the best we could come up with, it's just to make the point.)

10 significant lines shortened to 2, not to mention the insignificant lines that waste screen real estate. But the powers that be at Sun would ridicule this shorter form as nothing more than syntactic sugar. True, it's syntactic sugar, but one that saves you a hell of a lot of typing! Don't discard this as laziness on my part. Consider instead the number of hours that are collectively wasted around the world by programmers typing the longer, unnecessary form, whereas the compiler could just as easily infer it from the shorter one.

(And no, the fact that Eclipse provides some wizards to help is not good enough and doesn't mitigate the need for a good, succinct and abstract language. Indeed, the very fact that the Eclipse team thought it a good idea to spend time developing these wizards is proof enough that there is something wrong.)

2. Our internal procedures follow the waterfall model too closely. Yet, it's very similar to what other software companies do. I'm not completely against the "big design up front" notion but our lack of flexibility in that regard is a hindrance.

I don't want to trigger a debate on this, but let me just clarify what I mean by "big design up front". The idea is that at many stages there should be discussion among stakeholders regarding the direction development should take, especially at the requirements and design stages. Obviously, requirements or design can be revisited and changed later on, but that doesn't remove the need to discuss them initially. For instance, our process requires that a design document be written before coding starts. True, in my experience nothing focuses a discussion more than a document explaining the proposed course of action (possibly including discarded alternatives). So writing a design document up front is the ideal way to go: we can all discuss the proposed design, tweak it until it satisfies us, and then code. However, that's not very realistic except for the most trivial designs.

I for one can't work that way (and I've tried). English simply isn't structured like a programming language, so I can't think a design through in English. To design I must conjure up classes, think of their interactions, mock up a client, extract commonality, etc. The best way for me to do this is to use Java directly. Obviously, I don't need to code everything, but bare stubs won't do either. I discover the design as much as I think it. Indeed, in the middle of this "designing", I often realize something that will alter it significantly: a library that doesn't work as intended, an unknown limitation, a brilliant simplification… I'm glad then that I didn't write the document up front, because that would have required a whole new round of documentation and discussion.
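The working style described above might look like the following minimal sketch: rough out the key types as near-stubs, then write a mock client to see whether the interactions feel right, all before any document exists. (The domain, class names and methods here are hypothetical, chosen purely for illustration.)

```java
// Hypothetical domain: just enough code to exercise the proposed design.
interface OrderRepository {
    void save(Order order);
}

class Order {
    private final String id;
    Order(String id) { this.id = id; }
    String getId() { return id; }
}

// A throwaway in-memory implementation, standing in for the real one.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Map<String, Order> store =
        new java.util.HashMap<String, Order>();
    public void save(Order order) { store.put(order.getId(), order); }
    int size() { return store.size(); }
}

public class DesignSketch {
    public static void main(String[] args) {
        // Mock client: using the API is what reveals an awkward
        // interaction or a simplification, before anything is documented.
        InMemoryOrderRepository repo = new InMemoryOrderRepository();
        repo.save(new Order("A-1"));
        System.out.println(repo.size()); // prints 1
    }
}
```

The point isn't the code itself; it's that ten minutes of this exposes problems that thirty pages of prose would have hidden.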

So, in a way, I apply the "up front" strategy one step further: I ensure my design will hold water before writing a whole 30-page document with figures describing it. Anyone who has had to change a design document after the fact knows the tediousness of changing figures and text everywhere to accommodate, say, a simple change in the inheritance hierarchy. In that respect, code describes such relationships more succinctly than a document, so it makes sense to write some code first.

This actually contributes to my discontent. What if the language we used was more abstract, more succinct and less verbose than Java? Supposing coding itself would be more efficient with such a language, wouldn't design (using my method) be more efficient as well? And going one step further, could we review the design directly in code, without going through the lengthy documentation phase? Perhaps not completely, but could a hybrid of code and documentation be used at the review?

3. No matter how much time I invest working for an employer, I'm never really helping myself in direct proportion to my efforts. In fact, the more time I put in, the less relative benefit I get out of working for that company (overtime is not paid). The consequence is that I refrain from doing (too much) overtime.

This in itself is not really an issue, but it has a perverse effect. Sometimes I have the drive to go on, to continue working: I'm in the "zone". However, time is up and I go home. This is an additional "economy" that is not realized when working for someone else. By that I mean that if I had received some benefit from it, I would have traded spare time for time coding, which would have advanced the project faster. To use an economic term, the "marginal" passion that would have led me to produce more is lost. (Obviously, this doesn't apply to someone whose overtime is paid, but I don't think that's prevalent in our trade.)


The problem with these points is that they are not specific to the company I work for. For instance, a search in a popular job posting site reveals there are currently 82 Java positions open in my area, 59 C++ positions, 29 C# positions. Ruby, Python, Lisp and Scheme (languages I consider more "advanced") get a grand total of 0 positions.

***

Honestly, I've never seen an "enlightened" software shop of the kind Graham and Spolsky talk about. But that's great, since I'll be able to start my own and compete against the non-enlightened ones, right? Because now that I've experienced first hand what it is to work for a non-enlightened software company, I've got the itch to start my own.

The real question is this: are Spolsky and Graham right in a more general sense? Is it true that using better tools and languages, and/or hiring top-notch developers and giving them great working conditions (starting with myself ;), gives you a real, tangible edge (and not just a marketing edge in certain spheres)?

What if I start a company that develops, say, accounting software for individuals and small businesses? Or an IMS (IP Multimedia Subsystem) server? Or point-of-sale software? The fact that my software is written in this or that language and that my employees are the best is not going to directly influence the small business owner looking for accounting software. Marketing will. (Note that I've intentionally left out web applications and software used by programmers themselves, as we are more affected by the underlying technology than the rest of the population. Case in point: for some, reddit is great written in Lisp but sucks terribly written in Python.)

This question is very interesting to me, given that I'd very much like to have it my own way by choosing the languages and tools I use and the colleagues I work with (although at this stage it would probably be a single-person affair). Especially so if it lets me get my hands on the profit my employer reaps on top of what I cost, and generate even more profit by being more productive.

However, I can't help thinking that Graham and Spolsky overhype their beliefs. Is the efficiency increase realized by choosing the "better way" really that significant? Or does it instead all boil down to something less flattering to our ego like which company has the better marketing?

Don't misunderstand me. I know they don't claim that just writing software, without any form of marketing or organization around it, will work fine. What they do claim, however, is that the better way will give you a tremendous edge over the competition.

This distinction is very important to me, because writing software I'm excellent at; marketing is, well, another matter...

So, should I start my own software company after all?

