Archive for January, 2008

IDLs vs. Human Documentation

January 16th, 2008  |  Published in code generation, CORBA, documentation, IDL, WSDL

Patrick Mueller responded to my previous blog posting on interface definition languages, and I wanted to comment on his response. Long ago Patrick was involved in defining the Smalltalk bindings for CORBA IDL, so he’s a CORBA veteran like me, and in the big picture we agree on many things. It’s nice having him cast a critical eye on this stuff.

Note that Patrick mostly talks about data schemas, whereas my posting talks only of interface definition languages. These are two very different things, which I’ve noted in comments on his blog. In a reply comment he said they’re both metadata, which is true, but still, they’re very separable. REST depends heavily on data definitions, but it doesn’t require specialized interface definitions because it promotes a uniform interface. For data definition REST relies on and promotes media/MIME types, and the standardization of such data definitions is critical to allowing independently-developed consumers and providers to interact correctly. I doubt Patrick and I really disagree on this last point.
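
To make that distinction concrete, here’s a minimal Java sketch of what consuming a resource looks like on the REST side (the URL is invented for illustration): the interface is just HTTP’s uniform methods, and the media type of the response, not a service-specific interface definition, tells the consumer how to interpret the data.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class MediaTypeClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical resource; any resource is consumed the same way.
            URL url = new URL("http://example.com/orders/42");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");                          // uniform interface
            conn.setRequestProperty("Accept", "application/xml");  // negotiate the representation

            // The media type, not an IDL, says how to parse what comes back.
            System.out.println("Media type: " + conn.getContentType());
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
            in.close();
        }
    }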

One area where we apparently do disagree, though, is in the area of documentation. In my previous post I said that users of services ultimately rely on human-generated and human-readable documentation, not interface definition languages, to ensure their consuming applications interact correctly with those services. Patrick commented:

The documentation? What documentation? I’m picturing here that Steve has in mind a separate document (separate from the code, not generated from the code) written by a developer. But human generated documentation like this is still schema, only it’s not understandable by machines, pretty much guaranteed to get out of sync with the code, probably incomplete and/or imprecise. Not that machine generated schema might fare any better, but it couldn’t be any worse.

But there are more problems with this thought. The notion of hand-crafted documentation for an API is quaint, but impractical if I’m dealing with more than a handful of APIs.

I understand what Patrick’s saying here. Yes, documentation can get stale and out of sync. Still, I disagree. I’ve been near interface definition languages for at least 20 years now, and never once — not even once — have I seen anyone develop a consuming application without relying on some form of human-oriented documentation for the service being consumed. Such documentation might be as simple as a conversation with a developer across the hall, or reading comments in the definition language file itself, or might be from a README, email, a web page, a wiki, a Word document, a PDF, or a whole formal specification. I mean, what if the OMG had published only the ORB and Object Services IDL interfaces without the accompanying reams of human-oriented description and definition? Or if WSDL were enough, why the need for so many pages of human-oriented WS-* documentation?

Like I said in my previous post, interface definition languages exist for machines to generate code. They’re totally inadequate, though, for instructing developers on how to write code to use a service. The need for human documentation in this context isn’t quaint or impractical at all — it’s simply reality.

Lying Through Their Teeth: Easy vs. Simple

January 14th, 2008  |  Published in CORBA, design, distributed systems, REST, WS-*

I have to say that I agree with Ryan Tomayko on this one.

Among other things, Ryan touches on one of the favorite assertions of the REST detractors, which is that REST can’t be effective without an interface/service/resource definition language. After all, without such a language, how can you generate code, which in turn will ease the development of the distributed system by making it all look like a local system? Not surprisingly, the first comment on Ryan’s blog entry is exactly along these lines.

As I’ve been saying for years, trying to reverse-map your programming language classes into distributed services, such as via Special Object Annotations, is an attempt to turn local design artifacts into distributed ones, which we learned long ago is just plain wrong. You often end up paying for such shortcuts in areas such as reliability, flexibility, extensibility, versioning, reusability, and especially scalability.
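
For the record, this is roughly what that reverse mapping looks like in Java, along JAX-WS lines; OrderManager and its method are invented for illustration. An ordinary local class becomes the remote contract by annotation alone, and the WSDL is generated from it afterwards.

    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // A local Java class "promoted" to a remote service by annotation alone.
    // The WSDL and wire contract are reverse-generated from this local design
    // artifact, so changes to these Java signatures ripple out to every consumer.
    @WebService
    public class OrderManager {

        @WebMethod
        public double getOrderTotal(int orderId) {
            // The local method signature becomes the remote contract as-is.
            return 0.0; // placeholder
        }
    }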

Back in the halcyon days of CORBA, we generated code from OMG IDL, but IDL is not a local design artifact. OMG IDL was designed from Day One to define distributed systems (though we did add the “local” keyword to IDL sometime around 1999 or so to allow for easier local call optimizations). Note also that unlike the usual approach to defining WSDL, we never reverse-generated IDL from C++, Java, or any other programming language (though a questionable group eventually did come along and, trying to ride the Java popularity wave, define an OMG standard reverse IDL mapping for Java, despite strenuous objections from a number of us, including me). IDL also allowed for generating code in different programming languages for different parts of the same system. But the RPC roots of CORBA, its interface specialization requirements, and the inflexibility of the generated code, especially with respect to versioning, ultimately limited CORBA’s possibilities when it came to medium- to large-scale systems.
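
For anyone who never lived through it, here’s a sketch of the client side of that IDL-first flow in the Java mapping. The IDL in the comment and the QuoteService names are invented, and QuoteService, QuoteServiceHelper, and the stubs would be produced by an IDL compiler such as idlj, so this won’t build until that generation step has run.

    import org.omg.CORBA.ORB;

    // Hypothetical IDL, compiled with an IDL-to-Java compiler (e.g. idlj):
    //
    //     interface QuoteService {
    //         double get_quote(in string symbol);
    //     };
    //
    // The compiler emits QuoteService, QuoteServiceHelper, and stubs that
    // marshal the call over IIOP; the client just invokes a method on a proxy.
    public class QuoteClient {
        public static void main(String[] args) {
            ORB orb = ORB.init(args, null);

            // A stringified IOR would normally come from a file or a naming service.
            org.omg.CORBA.Object obj = orb.string_to_object(args[0]);

            // narrow() is generated specifically for this one interface; it's
            // exactly the kind of interface-specific code discussed above.
            QuoteService quotes = QuoteServiceHelper.narrow(obj);
            System.out.println("IBM: " + quotes.get_quote("IBM"));

            orb.shutdown(true);
        }
    }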

Proponents of definition languages seem to assert that such languages help with understandability. Such languages, they say, are required because they alone tell you how to invoke the service, what to pass to it, and what to expect in return. The problem with the way they make this assertion, though, is they make it sound like the application figures all that stuff out on its own with no human involvement. What happens in reality is that an actual human programmer sits down, reads the interface definition, more than likely reads some comments in the definition or a whole separate document that describes the interface in more detail, and perhaps even talks to the person who wrote the interface definition in the first place. Based on the knowledge gained, he then writes the application to call that interface. Similarly, with REST, you read the documentation and you write your applications appropriately, but of course the focus is different because the interface is uniform. Depending on the system, and assuming REST as implemented by HTTP, you might also be able to interact with it via your browser to help understand how it works, which I’ve found extremely valuable in practice (and yes, this works for application-to-application systems that are not designed primarily for browsers or human consumption). But ultimately, there’s no magic, regardless of whether or not you have a definition language.

What the proponents of definition languages seem to miss is that such languages are primarily geared towards generating tedious interface-specific code, which is required only because the underlying system forces you to specialize your interfaces in the first place. Keep in mind that specialized interfaces represent specialized protocols, and IDL was developed oh so long ago to generate the nontrivial code required to have RPC applications efficiently interact over such protocols, since back then computers and networks were far slower and less reliable than they are today, and getting that code right was really hard. When you have a uniform interface, though, the need to generate interface-specific interaction code basically goes away.
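
Here’s a rough Java sketch of the difference in practice, with invented URIs and payload: one small generic helper covers every resource because the “protocol” is just HTTP’s uniform methods, and none of it is generated from an interface definition.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class UniformClient {

        // One generic helper works for any resource; nothing is specific to
        // orders, quotes, or whatever else sits behind the URI.
        static int send(String uri, String method, String mediaType, byte[] body)
                throws Exception {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(uri).openConnection();
            conn.setRequestMethod(method);
            if (body != null) {
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", mediaType);
                OutputStream out = conn.getOutputStream();
                out.write(body);
                out.close();
            }
            return conn.getResponseCode();
        }

        public static void main(String[] args) throws Exception {
            String order = "http://example.com/orders/42";
            byte[] update = "<order><status>shipped</status></order>".getBytes("UTF-8");

            System.out.println(send(order, "PUT", "application/xml", update));
            System.out.println(send(order, "DELETE", null, null));
        }
    }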

(BTW, the first IDL I ever saw was at Apollo, where it was used not only for RPC in the Apollo Network Computing System (NCS) but also to define Domain/OS header files once and generate them into their C and Domain Pascal equivalents, rather than writing and maintaining them twice, once for each language.)

Some REST proponents like WADL. I’ve looked at it but haven’t used it, so I can’t really comment on it. I’ve never felt the need to seek out a resource definition language of any kind for my REST work, at least to date. YMMV.

BTW, on a somewhat related note, I still use CORBA, contrary to what some jackasses out there would like you to believe. In some industries, certain CORBA interfaces are standardized and even legally enforced. In others, leading players have defined CORBA interfaces for 3rd-party integration. These interfaces work, so those companies and industries have no intention of changing them to another technology anytime soon, and in fact they simply have no need to change them at all. I’ve had to work within some of these CORBA scenarios lately, and I have to say I’ve found it to be fun, like meeting up with an old friend you haven’t seen in a while. I’m sure many of these interfaces could be done better with REST, but they work as is, and there’s just no need to throw them out. Coincidentally, Ryan spoke of CORBA when he responded to the commenter mentioned above. All in all, I remain proud of my CORBA work over the years, as we did a lot of good stuff back then, even if since then we’ve found simpler ways of doing a few things.

Serendipitous Reuse

January 5th, 2008  |  Published in column, objects, REST, reuse, services, SOA

Reusability is often promoted not only as a goal but also as a feature of all kinds of software architectures, designs, and systems. For example, in the CORBA, WS-*, and SOA worlds I formerly haunted, everyone spoke nonchalantly of reuse as if it were a given. You were supposed to simply identify the objects or services required to support your business processes, and then specify their interfaces. Then, anyone wanting to provide one or more of those objects or services was merely supposed to follow the appropriate interfaces and write implementations for them. Applications were written to the interfaces and thus were automatically decoupled from the implementations. As a result of these reusable interfaces, you could also potentially reuse the objects and services that implemented them, as well as the applications that consumed them.
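
In code, that promise reads something like the following Java sketch, with invented names: the application is written against the agreed interface, so in principle any conforming implementation can be plugged in or reused.

    // The agreed-upon interface everyone is supposed to implement and consume.
    interface CustomerLookup {
        String nameFor(int customerId);
    }

    class CrmLookup implements CustomerLookup {
        public String nameFor(int customerId) { return "Alice (from the CRM)"; }
    }

    class LegacyDbLookup implements CustomerLookup {
        public String nameFor(int customerId) { return "Alice (from the legacy DB)"; }
    }

    public class BillingApp {
        private final CustomerLookup lookup;

        BillingApp(CustomerLookup lookup) { this.lookup = lookup; }

        void printInvoiceHeader(int customerId) {
            // The application never names a concrete implementation...
            System.out.println("Invoice for " + lookup.nameFor(customerId));
        }

        public static void main(String[] args) {
            // ...so, in theory, either provider is reusable behind the same interface.
            new BillingApp(new CrmLookup()).printInvoiceHeader(7);
            new BillingApp(new LegacyDbLookup()).printInvoiceHeader(7);
        }
    }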

Problem is, it never seemed to work out as easily as that. Most of the time, the interfaces people came up with were just too specific, and nobody could agree to apply them widely. Think of all the time people spent over the years in OMG, JSR, and W3C WS meetings trying to agree on just the infrastructure interfaces and not always succeeding. It’s therefore not surprising that there was never much success at defining broadly-accepted standard interfaces up at the application level; the scope is simply far too wide up there.

So, with technologies that promote interface variability, planned reuse is pretty hard. Consequently, serendipitous reuse, where services and facilities can be combined and reused beneficially in unforeseen ways, is virtually out of the question.

Stu Charlton’s blog was where I first saw those terms used, and I found them very enlightening. So did Bob Warfield.

One of the first things that attracted me to REST was the uniform interface constraint. Mark Baker first brought it to my attention nearly 8 years ago, and before I looked at it, I thought it was just a bad idea, like a totally generic doIt() interface, devoid of any meaningful semantics. But of course, there’s much more to it than that. The HTTP interface, for example, strikes a great balance that allows it to be efficiently reused across a very wide variety of applications. And when such a uniform interface is reused, the applications that use it stand a much better chance of being reusable themselves than if they were written against a non-uniform application-specific interface.
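
To make that contrast concrete, here’s a small sketch (names invented): a truly generic doIt() interface promises nothing, whereas HTTP’s uniform interface is also small but attaches agreed semantics to each method, and those semantics are what generic clients, caches, and intermediaries rely on.

    // A "totally generic" interface: one opaque operation, no shared semantics.
    interface DoIt {
        Object doIt(Object request);
    }

    class GenericCaller {
        Object call(DoIt service, Object request) {
            // Nothing here can be cached, safely retried, or routed intelligently,
            // because doIt() promises nothing about safety or idempotence.
            return service.doIt(request);
        }
    }

    // HTTP's uniform interface is also small, but each method carries agreed
    // semantics (GET is safe and cacheable; PUT and DELETE are idempotent),
    // and those guarantees are what let generic clients, caches, and proxies
    // be reused across wildly different applications.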

My latest Internet Computing column, entitled Serendipitous Reuse (and here’s the pdf if you prefer), explores how the uniform interface contributes to reuse of both the planned and serendipitous kinds.

Some developer advice for 2008

January 2nd, 2008  |  Published in commentary, productivity

Via Tim Bray, a commencement address by Bruce Eckel. It’s worth reading the whole thing, but I found this part especially interesting:

An even more fascinating metric is this: 5% of programmers are 20x more productive than the other 95%. If this were a science, like it claims, we could figure out how to get everyone to the same level.

Let’s say that this follows the 80-20 rule. Roughly 80% of programmers don’t read books, don’t go to conferences, don’t continue learning, don’t do anything but what they covered in college. Maybe they’ve gotten a job in a big company where they can do the same thing over and over. The other 20% struggle with their profession: they read, try to learn things, listen to podcasts, go to user group meetings and sometimes a conference. 80% of this 20% are not very successful yet; they’re still beginning, still trying. The other 20% of this 20% — that’s about 5% of the whole who are 20x more productive.

The lesson here is that if you want to be a great developer, you’ve gotta put in the extra effort that Bruce talks about. There are no shortcuts. In my experience, I’ve seen that there are quite a few developers who rarely read things that pertain to their profession, never attend conferences or talks, and certainly never look into trying new approaches that are even the slightest bit different from what they already know. Well, unless they’re forced to, of course, via organizational changes or layoffs. I don’t understand why anyone would willingly choose a profession for which they’re unwilling to invest in continuous career-long learning.

I also like what he says here:

You need to pay attention to economics and business, both of which are far-from-exact sciences. Listen to books and lectures on tape while you commute. Understanding the underlying business issues may allow you to detect the fortunes of the company you’re working for and take action early. When I first started working I looked askance at people who paid attention to business issues — that was suit stuff, not real technology. But those people were the smart ones.

Another reason to pay attention to the business side is that it’s actually rare that the best technology wins. I used to struggle greatly with this, and over the years I’ve seen many developers do the same. Understanding how markets work and how technologies advance in the marketplace is important for every developer, so they can put their work in perspective and perhaps be a little less religious about it.

So, from these ideas, my two recommendations for 2008 are:

  1. Learn a new programming language or new approach that takes you out of your comfort zone.
  2. Study one or more technology-focused business books.

In both cases, you’ll be very glad you did.