A commenter named Nick left a thoughtful response to my post about my “Multilanguage Programming” column. Rather than respond to it with another comment, I thought I’d turn my response into a full posting, as I think Nick’s feedback is representative of how many people feel about the topic.
Nick said:
I would say that instead of spending a lot of time on a conceptually different language it could be more beneficial to study, say, distributed algorithms or software/system architecture principles or your business domain. There is so much knowledge in this world that learning how to code the same thing in, roughly speaking, one more syntax seems like a waste of time. Even paying real attention to what is going on in cloud computing can easily consume most of one’s spare time.
I think there are assumptions here that are not necessarily true. Specifically, you’re not necessarily learning how to code the same thing in multiple languages; rather, the idea is that by choosing the best language for each task, the code you write is “just right” for the problem at hand. For example, I know from significant first-hand experience that if you want to write a set of distributed services that support replication, failover, and fault tolerance, the code you’d write to do that in C++ will be extremely different from the code you’d write in Erlang to achieve the same thing (well, actually, you’d be able to achieve far more in Erlang, in far fewer lines of code).
This is about much more than syntax. It’s about facilities, semantics, and trade-offs. If it were just syntax, then that would imply that all languages are equal in terms of expressiveness and capability, which we already know and accept to be untrue.
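To make that concrete, here is a minimal sketch, in Python rather than either of the languages above, of the kind of restart-on-crash supervision you end up hand-rolling in most languages. The worker, supervise, and max_restarts names are purely illustrative, and this crude single-machine version doesn’t even touch the replication and failover mentioned above; Erlang/OTP supervisors give you this behavior, and much more, essentially for free.

```python
# A hand-rolled sketch of restart-on-crash supervision, the sort of thing
# Erlang/OTP supervisors provide out of the box. All names here are
# illustrative; none come from the original column or comment.
import multiprocessing
import time


def worker():
    """Stand-in for a service process that may fail at any time."""
    print("worker: doing work")
    time.sleep(1)
    raise RuntimeError("simulated crash")


def supervise(target, max_restarts=3):
    """Run `target` in a child process and restart it when it dies, up to a limit."""
    restarts = 0
    while True:
        child = multiprocessing.Process(target=target)
        child.start()
        child.join()
        if child.exitcode == 0:
            return  # clean exit, nothing more to do
        restarts += 1
        if restarts > max_restarts:
            print("supervisor: giving up after %d restarts" % max_restarts)
            return
        print("supervisor: worker died (exit code %s), restarting" % child.exitcode)


if __name__ == "__main__":
    supervise(worker)
```

Even this toy handles only one failure mode on one machine; extending it across nodes is exactly where the line-count gap between languages opens up.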
The cloud computing topic actually provides a good example of why knowing multiple languages can be useful. To use the Google App Engine, for example, you need to develop your applications in Python. What if you don’t know Python? Too bad for you.
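For what it’s worth, this is roughly what the canonical App Engine “hello world” handler looked like at the time, using the webapp framework from the original Python SDK (details from memory, so treat them as approximate). If you don’t know Python, even this much is out of reach:

```python
# Minimal Google App Engine request handler using the Python SDK's classic
# webapp framework. Runs under the App Engine SDK's dev server or runtime,
# not as a standalone script.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app


class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from App Engine')


application = webapp.WSGIApplication([('/', MainPage)], debug=True)


def main():
    run_wsgi_app(application)


if __name__ == '__main__':
    main()
```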
From a real-life perspective, it takes years of working on nontrivial software to master a language. For example, some people still manage to have only a vague idea of java.util.concurrent — and this is just a small (and well explained in the literature) part of Java. How realistic is it to expect that the majority of developers will be able to master multiple languages concurrently?
I disagree that it takes years to master a language. One of the best OO developers I ever worked with was a mechanical engineer who taught himself programming. One of my current coworkers — a relatively young guy — started programming in Erlang only a few months ago, and he’s already writing some fairly sophisticated production-quality code. In 1988, I started using C++; by 1989, I was starting to help guys like Stan Lippman, Jim Coplien, and others correct coding mistakes in their excellent books. I have a BSEE, no formal computer science training whatsoever, and am completely self-taught as far as programming languages go. (The only class I ever had in any computer language was a BASIC class I had to take in 1981.) Two other coworkers started with Python just a few months ago and they do quite well with it at this point. I can cite numerous such examples from throughout my career. I don’t think any of us are super-programmers or anything like that, so if we can do it, I don’t see why it would be a problem for anyone else.
Perhaps you’re falling into the trap of the “huge language” problem I mentioned in my column. It certainly can take some people many years to master enormous languages like Java and C++, but most languages are simply nowhere near that big.
And who wants to maintain a code base written in widely different languages? Which most likely means multiple IDEs, unit testing frameworks, build systems (hey, not everyone is using even Maven yet), innumerable frameworks etc. And most of the interpreted languages among those are not even likely to run in the same VM. Not to mention the number of jobs asking for non-C++/Java skills.
I use a number of languages daily and I really have no trouble maintaining the code regardless of which language any particular piece happens to be written in, or whether I wrote the code or one of my teammates did. Once you know a language, you know it; switching to it is no more difficult than using your one and only language if you’re a monolingual developer.
You also mention the “multiple IDE” problem. The first draft of my column contained some fairly direct language describing my dislike of IDEs, or more accurately, my dislike of the IDE culture, but Doug Lea suggested I take it out, so I did. The problem is that some folks let the tool drive their solutions, rather than using the tool as a means of developing solutions. I’ve had numerous people tell me they won’t consider using a language unless their IDE fully supports it with Java- or Smalltalk-like refactoring. To me, that’s completely backwards. I’d rather use an extensible editor that can handle pretty much any language, thus letting me develop optimal solutions in the right languages, rather than having a mere editing tool severely limit my choice of possible solutions.
But there are language mavens and there are tool mavens, and they typically disagree. Follow that link and read, as the posting there is incredibly insightful. I am definitely a language maven; languages are my tools. I suspect, though, that Nick and others who raise similar questions to the ones quoted here lean more toward being tool mavens. I’m not passing judgment on either; I’m only pointing out the different camps to help pinpoint sources of disagreement.
As far as unit test frameworks, build systems, and other frameworks go, I haven’t ever found any big issues in those areas when using multiple languages. The reason, not surprisingly, is that knowing multiple languages gives you multiple weapons for testing and integration. Ultimately, when you’re used to using multiple languages, you’re used to these kinds of issues, and thus they don’t really present any formidable barriers.
And as far as jobs go, the best developers I’ve known throughout my career have been fluent in a number of programming languages, and each of them could work virtually wherever they wanted to. I don’t believe this correlation is mere coincidence.
Curiously enough, this argument is hardly ever mentioned. Authors tend to assume that developers are lazy or have nothing else to learn.
I don’t assume developers are lazy. Rather, I think our industry generally has a bad habit of continually seeking homogeneity in platforms, in languages, in tools, in frameworks, etc., and we really, really ought to know better by now. Once you learn to accept the fact that heterogeneity in computing is inevitable — since nothing can do it all, right? — you find yourself able to use that heterogeneity to your advantage, rather than continually battling against it and losing.
Personally, I am planning to look at Scala and probably Erlang, but even judging from the number of books on those, it’s clear to me that they represent merely a niche market.
Today’s niche market is tomorrow’s mainstream market. Regardless of whether either of those languages continues to grow, learning one or both will make you a better developer than you are today.
Consider the final question I ask in my column:
After all, do any of us really believe we’ve already learned the last programming language we’ll ever need?
I suspect the vast majority of developers would answer “no” to this question. Assuming that’s the case, then if you don’t regularly practice learning new languages, how do you know when you really need to start learning a new one, and how capable will you be of learning that next language when the need arises? The longer you stay with one language, the more isolated you become, typically without even realizing it. Shifting gears gets harder and harder. Then one day you look up and all the interesting work is elsewhere, out of your reach. Is that a position you want to put yourself in?