Defending Something Other Than RPC

May 24th, 2008  |  Published in CORBA, distributed systems, HTTP, messaging, objects, RPC

Josh Haberman takes me to task for my previous posting:

Steve Vinoski has come out very vocally against RPC in the last few days…

Actually, I’ve been saying similar things for years now, Josh, not just the last few days. For example, I noted problems with RPC in my Mar/Apr 2008 IEEE Internet Computing column entitled “Demystifying RESTful Data Coupling.” I noted problems with RPC in my Sep/Oct 2005 column entitled “RPC Under Fire.” I noted problems with RPC in my Jul/Aug 2002 column entitled “Web Services Interaction Models, Part 2: Putting the ‘Web’ Into Web Services.”

His blog entry basically makes fun of Cisco for inventing/releasing another RPC system. It’s not clear exactly what he thinks they should have done instead.

I think my posting pretty clearly implies that Cisco should have avoided writing their own and instead should have reused something that already exists.

What is strange about this criticism is that tons of technology companies have developed their own RPC system — Facebook and Cisco publicly, and other technology companies I am familiar with in a not-so-public way. Guess what: large commercial distributed systems are built largely on RPC. Is he arguing that all of the engineers at these companies simultaneously got the bad idea of investing in something they don’t need? If RPC is such a bad idea, then why is everybody doing it?

Is everybody really doing it? Are large commercial distributed systems really built largely on RPC? I’ve seen some non-trivial CORBA-based deployments over the years, but in my experience large systems are built using approaches other than RPC. Like the Web, which isn’t RPC. Like email, which isn’t RPC. Like pub/sub enterprise messaging systems, which aren’t RPC.

Let’s consider the definition of what an RPC actually is. The term is often misused to mean “a synchronous call to another system over the network.” This is not what an RPC is. For example, an HTTP request is synchronous, but it is not an RPC. RPC, rather, is a specific approach for developing networked applications where local calls wrap and hide operations that happen to be carried out on another system across the network. For starters, let’s check Wikipedia:

Remote procedure call (RPC) is a technology that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction. That is, the programmer would write essentially the same code whether the subroutine is local to the executing program, or remote.
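
To make that distinction concrete, here’s a minimal sketch using Python’s standard-library xmlrpc modules (the add operation and port number are mine, purely for illustration). The whole point is the last line: the remote invocation is syntactically indistinguishable from a local call.

    # Server half: expose an ordinary function as a remote procedure.
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        return a + b

    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
    server.register_function(add)
    # server.serve_forever()  # run this half in a separate process

    # Client half: the network round trip hides behind a plain call.
    import xmlrpc.client

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    result = proxy.add(2, 3)  # looks exactly like local code: add(2, 3)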

Next, let’s check RFC 707, where RPC originated, in which James E. White specifically proposed a procedure call model for networked applications designed to hide the network, thereby allowing developers to use familiar local programming approaches in applications that happen to perform network operations. Quoting from that RFC:

Ideally, the goal of both the Protocol and its accompanying RTE is to make remote resources as easy to use as local ones. Since local resources usually take the form of resident and/or library subroutines, the possibility of modeling remote commands as “procedures” immediately suggests itself. The Model is further confirmed by the similarity that exists between local procedures and the remote commands to which the Protocol provides access. Both carry out arbitrarily complex, named operations on behalf of the requesting program (the caller); are governed by arguments supplied by the caller; and return to it results that reflect the outcome of the operation. The procedure call model thus acknowledges that, in a network environment, programs must sometimes call subroutines in machines other than their own.

and also:

The procedure call model would elevate the task of creating applications protocols to that of defining procedures and their calling sequences. It would also provide the foundation for a true distributed programming system (DPS) that encourages and facilitates the work of the applications programmer by gracefully extending the local programming environment, via the RTE, to embrace modules on other machines. This integration of local and network programming environments can even be carried as far as modifying compilers to provide minor variants of their normal procedure-calling constructs for addressing remote procedures (for which calls to the appropriate RTE primitives would be dropped out).
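
Contrast that with plain HTTP, where nothing pretends to be a local procedure: you construct a request, send it, and deal explicitly with status codes and representations. A small sketch using Python’s standard library (the URL is an arbitrary example):

    import http.client

    # The network interaction is explicit: the request is built and
    # sent by hand, and the response must be examined, not assumed.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/")
    resp = conn.getresponse()
    if resp.status == 200:
        body = resp.read()
    elif resp.status in (301, 302):
        pass  # follow the Location header, retry, etc.
    conn.close()

Synchronous, yes; an RPC, no.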

Josh continues:

Yes, on a network sh*t happens, and no sane RPC system will try to hide this from you.

As you can see from the original definition of RPC, something called an RPC that doesn’t hide the network is, by definition, not an RPC. As I said above, unfortunately the term is often misused to mean “synchronous messaging,” and that incorrect usage seems to be what Josh is defending. Josh then says:

But then again, I don’t know of any RPC system that tries to hide this from you except possibly CORBA.

That’s not correct either. What CORBA actually does is make everything appear remote, even local objects, but it does so in a way that allows object request broker (ORB) implementations to bypass much of the overhead of remote invocations when the ORB knows that a target object is local. Still, not all the overhead can be eliminated, due to object lifecycle and method dispatching requirements, meaning that such collocated calls are never quite as fast as true local calls. DCE also treats services as always being remote, but last I checked it included no local bypass optimizations (though a variant called OODCE once did, IIRC). Either way, what’s important with these systems is that calls to remote operations look just like any other calls within your code. And that’s RPC.
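
To illustrate what that looks like from the caller’s side, here’s a toy sketch (my own invention, not CORBA’s actual API) of an object reference that dispatches in-process when the target happens to be local and over the wire otherwise. Either way a dispatch layer sits between caller and target, which is why collocated calls never quite match plain local calls:

    import xmlrpc.client

    # Toy object reference with a collocation optimization.
    _local_registry = {}  # name -> in-process object, if any

    class ObjectRef:
        def __init__(self, name, url):
            self.name, self.url = name, url

        def __getattr__(self, op):
            target = _local_registry.get(self.name)
            if target is not None:
                # Collocated case: skip marshaling, but still pay for
                # this extra dispatch hop.
                return getattr(target, op)
            # Remote case: marshal the call and send it over the wire.
            return getattr(xmlrpc.client.ServerProxy(self.url), op)

    # Client code is identical either way -- and that's RPC:
    #   ref = ObjectRef("calc", "http://localhost:8000/")
    #   ref.add(2, 3)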

Regarding versioning problems, Josh says:

But any RPC framework worth its salt makes it possible to have different interface versions interoperate. Adding a new parameter? No problem, old servers simply won’t see it. Completely changing the semantics of your call? No problem — just give the new call a new name.

Yes, Josh, there are generally ways to do versioning in such systems, but they’re not very good. CORBA includes some facilities to help with versioning, but in practice they don’t actually help that much. Both COM and CORBA promoted interface inheritance and runtime interface negotiation (called “narrowing” in CORBA) as a way to do versioning, which works, but only for a restricted set of changes. Add a parameter to an existing call? Sorry, no can do, unless your marshaling format carries complete information for the entire call, including parameter names, types, and positions, and also versions each parameter. Systems like CORBA, DCOM, and DCE specifically do not do this, because of the large overhead it would add whether a given application used it or not, and in CORBA’s case also because of the interference it would cause for local dispatching optimizations. All in all, versioning is hard, not only for RPC, but for distributed systems in general.
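
Here’s a tiny sketch of why adding a parameter breaks things, assuming a purely positional wire format like the ones these systems use (parameter names never go on the wire):

    import struct

    # v1 wire format: two 32-bit ints, identified only by position.
    def marshal_v1(a, b):
        return struct.pack("!ii", a, b)

    def unmarshal_v1(data):
        return struct.unpack("!ii", data)  # expects exactly 8 bytes

    # A v2 client adds a third parameter...
    def marshal_v2(a, b, flags):
        return struct.pack("!iii", a, b, flags)

    # ...and a v1 server, knowing nothing of names or versions, chokes:
    try:
        unmarshal_v1(marshal_v2(2, 3, 1))
    except struct.error as e:
        print("v1 server rejects v2 request:", e)

The workaround Josh mentions, giving the changed call a new name, amounts to adding a whole new procedure alongside the old one, which is exactly the point: the parameter list itself can’t evolve.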

Middleware and distributed systems veterans are well aware of arguments like the ones I’ve made recently in my blog and elsewhere, and in various publications over the years; such arguments have been common knowledge among us for years.

Cisco’s system is not available yet, but when it comes out, I’m quite certain you’ll find, Josh, that it’s the same old thing, just repackaged in a new box.

Serendipitous Reuse

January 5th, 2008  |  Published in column, objects, REST, reuse, services, SOA

Reusability is often promoted not only as a goal but also as a feature of all kinds of software architectures, designs, and systems. For example, in the CORBA, WS-*, and SOA worlds I formerly haunted, everyone spoke nonchalantly of reuse as if it were a given. You were supposed to simply identify the objects or services required to support your business processes, and then specify their interfaces. Then, anyone wanting to provide one or more of those objects or services was merely supposed to follow the appropriate interfaces and write implementations for them. Applications were written to the interfaces and thus were automatically decoupled from the implementations. As a result of these reusable interfaces, you could also potentially reuse the objects and services that implemented them, as well as the applications that consumed them.
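
In code, the promise looked roughly like this schematic sketch (the service and all the names are made up):

    from abc import ABC, abstractmethod

    # The agreed-upon interface, fixed up front.
    class OrderService(ABC):
        @abstractmethod
        def place_order(self, customer_id, item): ...

    # Any provider implements the interface...
    class WarehouseOrderService(OrderService):
        def place_order(self, customer_id, item):
            return "order for %s accepted for %s" % (item, customer_id)

    # ...and any application codes against it, decoupled (in theory)
    # from every particular implementation.
    def application(svc):
        print(svc.place_order("c42", "widget"))

    application(WarehouseOrderService())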

Problem is, it never seemed to work out as easily as that. Most of the time, the interfaces people came up with were just too specific, and nobody could agree to apply them widely. Think of all the time people spent over the years in OMG, JSR, and W3C WS meetings trying to agree on just the infrastructure interfaces and not always succeeding. It’s therefore not surprising that there was never much success at defining broadly-accepted standard interfaces up at the application level; the scope is simply far too wide up there.

So, with technologies that promote interface variability, planned reuse is pretty hard. Consequently, serendipitous reuse, where services and facilities can be combined and reused beneficially in unforeseen ways, is virtually out of the question.

Stu Charlton’s blog was where I first saw those terms used, and I found them very enlightening. So did Bob Warfield.

One of the first things that attracted me to REST was the uniform interface constraint. Mark Baker first brought it to my attention nearly 8 years ago, and before I looked at it, I thought it was just a bad idea, like a totally generic doIt() interface, devoid of any meaningful semantics. But of course, there’s much more to it than that. The HTTP interface, for example, strikes a great balance that allows it to be efficiently reused across a very wide variety of applications. And when such a uniform interface is reused, the applications that use it stand a much better chance of being reusable themselves than if they were written against a non-uniform application-specific interface.
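
The payoff shows up in client code. Here’s a minimal sketch (the URLs are arbitrary examples): one generic function, written once against HTTP’s uniform interface, works unchanged against resources its author never heard of.

    import urllib.request

    # A generic fetcher: no stubs, no IDL, no resource-specific
    # interface -- just GET on a URI.
    def fetch(uri):
        with urllib.request.urlopen(uri) as resp:
            return resp.headers.get_content_type(), resp.read()

    for uri in ("http://example.com/", "http://example.org/"):
        ctype, body = fetch(uri)
        print(uri, ctype, len(body))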

My latest Internet Computing column, entitled Serendipitous Reuse (and here’s the pdf if you prefer), explores how the uniform interface contributes to reuse of both the planned and serendipitous kinds.