
Internet Computing Call for Special Issue Proposals

January 22nd, 2008  |  Published in distributed systems, integration, performance, publishing, REST, reuse, scalability, services

As you may know, I’m a columnist for IEEE Internet Computing (IC), and I’m also on their editorial board. Our annual board meeting is coming up, so to help with planning, we’ve issued a call for special issue proposals.

The topics that typically come up in this blog and others it connects to are pretty much all fair game as special issue topics: REST and the programmatic web, service definition languages, scalability issues, intermediation, tools, reuse, development languages, back-end integration, etc. Putting together a special issue doesn’t take a lot of work, either. It requires you to find 3-4 authors each willing to contribute an article, reviewers to review those articles (and IC can help with that), and a couple others to work with you as editors. As editors you also have to write a brief introduction for the special issue. I’ve done a few special issues over the years and if you enlist the right authors, it’s a lot less work than you might think.

As far as technical magazines go, IC is typically one of the most cited, usually second only to IEEE Software, as measured by independent firms. I think one reason for this is that it has a nice balance of industry and academic articles, so its pages provide information relevant to both the practitioner and the researcher.

Lying Through Their Teeth: Easy vs. Simple

January 14th, 2008  |  Published in CORBA, design, distributed systems, REST, WS-*

I have to say that I agree with Ryan Tomayko on this one.

Among other things, Ryan touches on one of the favorite assertions of the REST detractors, which is that REST can’t be effective without an interface/service/resource definition language. After all, without such a language, how can you generate code, which in turn will ease the development of the distributed system by making it all look like a local system? Not surprisingly, the first comment on Ryan’s blog entry is exactly along these lines.

As I’ve been saying for years, trying to reverse-map your programming language classes into distributed services, such as via Special Object Annotations, is an attempt to turn local design artifacts into distributed ones, which we learned long ago is just plain wrong. You often end up paying for such shortcuts in areas such as reliability, flexibility, extensibility, versioning, reusability, and especially scalability.

Back in the halcyon days of CORBA, we generated code from OMG IDL, but IDL is not a local design artifact. OMG IDL was designed from Day One to define distributed systems (though we did add the “local” keyword to IDL sometime around 1999 to allow for easier local call optimizations). Note also that unlike the usual approach to defining WSDL, we never reverse-generated IDL from C++, Java, or any other programming language (though a questionable group eventually did come along and, trying to ride the Java popularity wave, define an OMG standard reverse IDL mapping for Java, despite strenuous objections from a number of us, including me). IDL also allowed for generating code in different programming languages for different parts of the same system. But the RPC roots of CORBA, its interface specialization requirements, and the inflexibility of the generated code, especially with respect to versioning, ultimately limited CORBA’s possibilities when it came to medium- to large-scale systems.

Proponents of definition languages seem to assert that such languages help with understandability. Such languages, they say, are required because they alone tell you how to invoke the service, what to pass to it, and what to expect in return. The problem with the way they make this assertion, though, is they make it sound like the application figures all that stuff out on its own with no human involvement. What happens in reality is that an actual human programmer sits down, reads the interface definition, more than likely reads some comments in the definition or a whole separate document that describes the interface in more detail, and perhaps even talks to the person who wrote the interface definition in the first place. Based on the knowledge gained, he then writes the application to call that interface.

Similarly, with REST, you read the documentation and you write your applications appropriately, but of course the focus is different because the interface is uniform. Depending on the system, and assuming REST as implemented by HTTP, you might also be able to interact with it via your browser to help understand how it works, which I’ve found extremely valuable in practice (and yes, this works for application-to-application systems that are not designed primarily for browsers or human consumption). But ultimately, there’s no magic, regardless of whether or not you have a definition language.

What the proponents of definition languages seem to miss is that such languages are primarily geared towards generating tedious interface-specific code, which is required only because the underlying system forces you to specialize your interfaces in the first place. Keep in mind that specialized interfaces represent specialized protocols, and IDL was developed oh so long ago to generate the nontrivial code required to have RPC applications efficiently interact over such protocols, since back then computers and networks were far slower and less reliable than they are today, and getting that code right was really hard. When you have a uniform interface, though, the need to generate interface-specific interaction code basically goes away.
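
To illustrate the contrast, here’s a minimal Python sketch (not any particular framework; the example.org URLs are placeholders): with HTTP’s uniform interface, one tiny generic request constructor covers every resource, so there’s simply no interface-specific client code left to generate.

```python
import urllib.request

def make_request(url, method="GET", body=None, content_type=None):
    # One generic constructor serves every resource: the verb set
    # (GET/PUT/POST/DELETE) never varies per service, so nothing here
    # is specific to any one interface.
    headers = {"Content-Type": content_type} if content_type else {}
    return urllib.request.Request(url, data=body, headers=headers, method=method)

# The same uniform verbs apply to any resource; only the URL and the
# representation change. (example.org is a placeholder.)
get_order = make_request("http://example.org/orders/42")
del_order = make_request("http://example.org/orders/42", method="DELETE")
```

Compare that with an IDL-based system, where each specialized interface would require its own generated stub before a client could even be written.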

(BTW, the first IDL I ever saw was at Apollo, where it was used not only for RPC in the Apollo Network Computing System (NCS) but also to define Domain/OS header files once and generate them into their C and Domain Pascal equivalents, rather than writing and maintaining them twice, once for each language.)

Some REST proponents like WADL. I’ve looked at it but haven’t used it, so I can’t really comment on it. I’ve never felt the need to seek out a resource definition language of any kind for my REST work, at least to date. YMMV.

BTW, on a somewhat related note, I still use CORBA, contrary to what some jackasses out there would like you to believe. In some industries, certain CORBA interfaces are standardized and even legally enforced. In others, leading players have defined CORBA interfaces for 3rd-party integration. These interfaces work, so those companies and industries have no intention of changing them to another technology anytime soon, and in fact they simply have no need to change them at all. I’ve had to work within some of these CORBA scenarios lately, and I have to say I’ve found it to be fun, like meeting up with an old friend you haven’t seen in a while. I’m sure many of these interfaces could be done better with REST, but they work as is, and there’s just no need to throw them out. Coincidentally, Ryan spoke of CORBA when he responded to the commenter mentioned above. All in all, I remain proud of my CORBA work over the years, as we did a lot of good stuff back then, even if since then we’ve found simpler ways of doing a few things.

“Internet SOAP” vs. REST: Huh?

December 28th, 2007  |  Published in distributed systems, REST, scalability, SOA

Dilip Ranganathan pointed me to a long rant from Ganesh Prasad about using SOAP at Internet scale. I see that Stu Charlton already chimed in there with some good comments and analysis, but I think there’s still more to say.

Unless I’m missing something, Ganesh seems to be saying, “Hey, if we just stick SOAP directly onto TCP, we can scale beyond Web scale to Internet scale!” Oh, if only it were so easy. I would think that it’s fairly obvious that just because TCP scales well doesn’t mean that higher-level protocols sitting on top of it automatically scale to the same degree.

Why does the Web scale so well? Because of particular constraints deliberately imposed to induce specific architectural properties. The caching constraint contributes heavily to Web scalability, for example. Statelessness and the uniform interface also play a big role there. These constraints, along with conditional GET, allow messages to be significantly reduced in size or, better yet, eliminated altogether. The resulting scalability impact is huge.
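
The conditional GET mechanism can be sketched in a few lines. This is a minimal illustration of the server-side check, not any particular framework’s API; the representation and the SHA-1 ETag scheme are invented for the example:

```python
import hashlib

def conditional_get(representation, if_none_match):
    # Tag the current representation; a real server would typically
    # precompute or cache this rather than hash on every request.
    etag = '"%s"' % hashlib.sha1(representation).hexdigest()
    if if_none_match == etag:
        return 304, etag, b""          # Not Modified: no body transferred
    return 200, etag, representation   # full response, tagged for next time

# First fetch transfers the whole representation...
status, etag, body = conditional_get(b"<order id='42'/>", None)
# ...but revalidation with the remembered ETag moves no body at all.
status2, _, body2 = conditional_get(b"<order id='42'/>", etag)
```

The second exchange is the “eliminated altogether” case: the client already holds the representation, and the 304 merely confirms it’s still fresh.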

Ganesh talks about a lot of the things you’d have to add to the mix to get a useful SOA ecosystem on top of SOAP/TCP, but nowhere does he talk about the specific architectural properties and constraints required to make it all scale. Without that, it just ain’t gonna happen. Furthermore, I don’t believe any system based either on interface specialization (i.e., the opposite of the uniform interface constraint) or on “processThis” can scale to Web scale. Interface specialization significantly increases coupling while reducing visibility and applicability, while “processThis” is so devoid of semantics that it offers nowhere to practically apply constraints like caching and statelessness that are so critical to scalability.

Answers for Sergey

November 17th, 2007  |  Published in distributed systems, dynamic languages, REST, WS-*

Sergey Beryozkin, a former coworker at IONA, has posted some questions specifically for me. Questions are good; it’s almost like being interviewed, and it’s certainly better than being called sad and depressing! Let’s see if I can provide some answers.

But first, I want to address something I keep hearing when I post things like the answers below: “Steve, you’ve changed your whole story ever since you left IONA!” While the technical opinions I’ve been expressing in this new blog might seem shocking because they differ from what I used to say, they’ve actually been under development for a long time. What actually happened is

  1. My technical opinions changed gradually over the course of 5 or 6 years while I was still at IONA.
  2. I finally chose to leave because my preferred technical direction had diverged significantly from where IONA was going.
  3. I left when I did because of a wonderful opportunity that came up at a new place where I could put all my latest ideas to work. So far, it’s all I hoped it would be, and more.
  4. I couldn’t publicly blog about my new opinions, write about them in my column, or present them at conferences until after I had changed jobs, for obvious reasons.

If you had enough time and patience to read through all my IC columns, you’d certainly find plenty of evidence that my thinking about all this stuff had already been changing for years prior to my departure from IONA. I hope this clears things up for those of you who mistakenly think I’ve just done an abrupt about-face.

Now, on to Sergey’s questions:

1. Do you think client code generation is evil? If yes, do you expect people to do manual programming on a large scale?

In days long gone by, code generation clearly helped us get a grip on certain types of distributed systems, and make advances as a result. But since then we’ve learned that you can’t pretend a distributed system is a local one, that RPC is less than desirable, and all kinds of other lessons.

All in all, I am now of the opinion that it’s generally best to avoid distributed system development based on language-first approaches (unless it’s a language like Erlang where distribution is effectively built in, but even then you still can’t ignore distribution issues). Code generation is based on the notion that I want to develop my distributed system using the idioms and practices of my programming language, which I now believe is almost always wrong. If you avoid specialized interfaces and use a uniform interface instead, it significantly reduces the need for code generation. On top of that, using standard MIME types for exchanged data rather than making up your own data types in WSDL or Java pretty much eliminates the need for code generation in many cases, IMO.
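
As a small illustration of that last point (the payload and field names here are invented for the example): when the exchanged data uses a standard MIME type such as application/json, the language’s stock tools do the work that generated bindings used to do.

```python
import json

# A response body of type application/json, as it might arrive off the
# wire. No WSDL, no generated stub classes -- the standard library's
# JSON parser is the entire "binding".
payload = b'{"order": {"id": 42, "status": "shipped"}}'
order = json.loads(payload)["order"]
```

The client still needs to know what the fields mean, of course, but that knowledge comes from reading the service’s documentation, not from generated code.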

Now, I have to say that your question seems to imply that code generation is required for large-scale development. I don’t see why that would be the case. “Manual” programming isn’t much of a chore when you choose the right language.

People seem to get really upset when I say that the static typing benefits of popular imperative languages are greatly exaggerated, and when I say that developing real, working systems in dynamic languages is not only possible, it’s preferable. I usually find that, coincidentally, those are the same people who’ve never honestly tried the alternatives. Some who actually do try the alternatives do so by trying to use the new language exactly the same way they use their favorite language, and when they fail for what should be obvious reasons, they blame the new language.

2. If code generation is acceptable, would you welcome WADL? If yes, what to do with generated client types with respect to change management?

I’ve personally never encountered a need for WADL, but that’s just my opinion and experience. Marc Hadley and others obviously find it useful, and the RESTful Web Services book promotes it, and they’re all smart folks, so maybe there’s something to it.

Either way, no interface definition language is ever going to keep you or some other real live person from having to figure out what the service actually does and how to actually use it, and then coding your client accordingly. Normally you figure out that sort of thing by reading some sort of human-readable document, not just by looking at WSDL, WADL, or any other IDL. So if you have to read a document, and you’re avoiding code generation thanks to the uniform interface and the use of standard data formats, why bother with any IDL?

3. Do you think the idea of interfaces is broken? Do you see any point in creating minimalistic yet not generic interfaces while encouraging users to focus on data?

I believe that it’s important to avoid specialized interfaces whenever possible and prefer a uniform interface instead. Interface specialization, even of the minimal variety you mention, inhibits reuse. My Jan/Feb ’08 IC column, which is not yet published but is already written, covers this in detail.

4. Would you expect, in the future, even software modules interacting with each other through common generic interfaces?

I assume you’re referring to modules residing together within a single address space. To some extent that’s already been happening for years, thanks to frameworks, which of course are also based on interface uniformity. But will we ever get to the point in the foreseeable future where all entities residing within a single address space have the same generic interface? No.

Remember, REST is an example of applying well-chosen constraints to achieve desired architectural properties for a broad class of distributed systems, and so that’s what its constraints are all about. It might be an interesting brain game to consider what properties REST’s constraints could induce within a non-distributed system, but I’m not sure the exercise would be of any practical benefit.

5. “WS-* was simply not worth it to any customer or to me” – was it not?

The customers that I saw benefit from WS-related technology gained those benefits only because of IONA-specific innovations that were not part of WS-*. In general, WS-* didn’t do anything for them that wasn’t already possible with prior technologies.

6. Do you think WS-Policy is a useless technology?

I personally have no use for it.

7. Do you think AtomPub is the best way to do REST? Is AtomPub better than SOAP?

AtomPub fits a particular class of problems, but it targets only a subset of what REST can be applied to, so it doesn’t solve everything (and neither does REST, of course). I don’t believe AtomPub and SOAP are directly comparable, but I personally haven’t found a use for SOAP, other than when I’m forced to talk to systems that offer only SOAP-based interfaces.

In 1999-2000 I thought SOAP was finally going to help glue the CORBA and COM worlds together and make for a happy integrated place. But as I learned more and more about REST, starting at roughly the same time, I came to see that SOAP was missing the boat, big time, by abusing HTTP.

8. What is a better way to protect investments made into WS-*? Throw them away and start from scratch, or start embracing the Web while improving on what can be done with WS-*?

I’m not sure there’s a general answer to this question, as it’s not really a technical question and it depends too much on particular business situations. Personally, if my current job involved WS-*-based systems that I had any say over, I’d throw them away; since I’m in a startup, we could easily do that. Thankfully, though, we have no such systems.

9. Do you think an “integration” problem IONA has been so good at is an “overblown” problem?

I think that in the whole scheme of things the integration problems that IONA is good at solving are not very common. Nevertheless, those problems are very real; that’s IONA’s niche, and they’re great at solving them. The very bright developers, customer engineers, and sales engineers at IONA regularly come up with wonderful, well-engineered, and budget-pleasing solutions when nobody else in the industry can. But if you work in that environment day after day, it’s only natural that you might start to think that all integration problems look like those uncommonly difficult ones. In the big picture, that’s just not the case.

10. Can you please, if possible, describe a bit what kind of (software) clients will use your RESTful systems, (Web)UI tools or client software modules pushing the data up the response chain?

In general, the company I work for is in stealth mode, so there’s virtually nothing I can tell you. But I will say that when you’re building RESTful resources, you tend not to think of the browser and other client software differently — they’re both just clients. If you’re primarily doing Web development and maybe sprinkling in some support for programmatic clients for good measure, that’s different. Either way, anti-REST folks commonly claim that REST’s success is due only to the fact that there’s a human-driven browser in the mix, but that’s one of the dumbest things I’ve ever heard.

11. What is the difference between service factories found in CORBA and RESTful services creating new child resources as part of POST (as far as managing child resources is concerned)?

As far as managing child resources is concerned, the difference is that CORBA has no uniform interfaces or data representations. The CORBA client therefore has to be specifically coded to be able to manage the newly-created resource.
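
A minimal in-memory sketch of the REST side of that answer (no particular framework; the /orders URI scheme is hypothetical): POST to a collection returns the child’s URI in a Location header, after which the uniform verbs suffice to manage the child without any resource-specific stubs.

```python
class Collection:
    """A toy resource collection illustrating POST-creates-child."""

    def __init__(self):
        self._items, self._next = {}, 1

    def post(self, representation):
        # Create a child resource and hand back its URI, as a real
        # server would via a 201 response's Location header.
        uri = "/orders/%d" % self._next
        self._items[uri] = representation
        self._next += 1
        return 201, {"Location": uri}

    def get(self, uri):
        return (200, self._items[uri]) if uri in self._items else (404, None)

    def delete(self, uri):
        return 204 if self._items.pop(uri, None) is not None else 404

orders = Collection()
status, headers = orders.post(b"<order/>")
child = headers["Location"]   # the client learns the child's URI here
```

Everything the client does with the new child afterwards goes through the same generic verbs it already knows, which is precisely what a CORBA factory client cannot rely on.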

12. Do you always prefer dealing with fine-grained resources?