Hacker News

> For instance, I once saw a Silverlight app that took 20 minutes to initialize because it traversed a tree of relationships using REST. It started out O.K. but as the app grew more complicated it took tens of thousands of requests and an incredible amount of latency.

Not that I agree with that particular architecture, but my first question is: why did the Silverlight app discard the knowledge that it had worked hard to discover? If a technician spent 20 minutes figuring out how a thing worked, would he just willfully forget it?

> People who are building toy applications can blubber about "the decoupling of the client from the server" but the #1 delusion in distributed systems is that you can compose distributed operations the same way you compose function calls in a normal program.

The whole idea behind object oriented programming is that you don't compose functions, you let objects pass and respond to messages and record their observations (state).

> All of the great distributed algorithms such as Jacobson's heuristic for TCP and BitTorrent have holistic properties possessed by the system as a whole that are responsible for their success.

Meaning that the "objects" observe and record information about the world and respond appropriately based on their (recorded) observations. Interestingly enough, Alan Kay said that TCP was one of the few things that was designed well from the start. (Or maybe it was IP, I'm trying to find a source.)

>Security is another problem with REST. Security rules are usually about state transitions, not about states. To ensure that security (and integrity) constraints are met, REST applications need to contain error prone code that compares the before and after states to check if the transition is legal. This is in contrast to POX/RPC applications in which it is straightforward to analyse the security impacts of individual RPC calls.

That's a good point about security rules and state transitions, but I don't understand the problem: objects are supposed to be responsible for some kind of state, and we should let them worry about their state transitions. Can you provide an example of error-prone code that needs to compare the before and after states?

------------------------------------------------------------------------------------------------------------------------

REST isn't a scam, but the tooling (programming languages and frameworks) just isn't there. Our languages, even the so-called object-oriented ones, are still based around the idea of calling procedures. REST works when you're dealing with objects that respond to messages, not imperatives.

Alan Kay made the case for something like REST in his 1997 OOPSLA keynote[1] (preceding Fielding's dissertation by 3 years). The idea is that in really big systems (distributed over space and time), the individual components need to be smart enough to learn how to interact with the other components, because the system is simply too big to accommodate static knowledge (and I say this as a fan of statically typed languages).

[1]http://video.google.com/videoplay?docid=-2950949730059754521 at 43:00



> REST works when you're dealing with objects that respond to messages

You mean GET, PUT, POST, DELETE? This is my criticism of REST. If we transform all our processing objectives into nouns which we can address with these verbs, we will arrive straight away in the http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...

> The whole idea behind object oriented programming is that you don't compose functions, you let objects pass and respond to messages and record their observations (state).

But this was SOAP in the first place?


There is nothing in REST that suggests you can't have more verbs. The restriction is that they need to be potentially applicable to all resources. They can't be definitionally specific to a subset of resources, or you lose the benefits of a uniform interface.

The point of all of this is interoperability. Communication requires shared understanding, which is impossible if everyone invents their own nouns and verbs at whim.


This just sounds like a repeat of XML and similar debacles, where people tried to address design decisions outside the problem domain. XML is no more interoperable than a well-documented binary protocol. I'm sure REST is great for some applications, but this attitude that we can do all our design upfront is bad for the web. I mean, the browser just recently rediscovered interrupt-driven programming with the introduction of websockets. That's pretty embarrassing if you ask me.


REST is the opposite of upfront design. The big idea is that objects can, in an ad-hoc fashion, discover what services another object offers (HATEOAS). If an object chooses to remember the state and list of services it discovers (caching), the remote object can offer advice about how long to store the list (cache control).

In fact, REST specifically lets systems grow organically since each object never assumes knowledge about either state or state transitions, all knowledge is contingent and empirically discovered.
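A minimal sketch of the discovery-plus-caching idea described above, under invented assumptions: the `_links` payload shape, the URLs, and the server response are all hypothetical (no real network call is made), but the mechanics (learn the offered services from the response, remember them only as long as Cache-Control advises) are the ones being discussed.

```python
import json
import time

def parse_cache_control(header):
    """Return the advised max-age in seconds, or 0 if uncacheable."""
    for part in header.split(","):
        part = part.strip()
        if part.startswith("max-age="):
            return int(part.split("=", 1)[1])
    return 0

class DiscoveredResource:
    """Remembers the links a response advertised, until the advice expires."""
    def __init__(self, body, cache_control):
        self.links = json.loads(body)["_links"]  # services offered right now
        self.expires_at = time.time() + parse_cache_control(cache_control)

    def is_fresh(self):
        return time.time() < self.expires_at

# Simulated response from a remote object (hypothetical payload).
body = '{"_links": {"self": "/orders/42", "cancel": "/orders/42/cancel"}}'
resource = DiscoveredResource(body, "max-age=60")

print(sorted(resource.links))   # ['cancel', 'self']
print(resource.is_fresh())      # True
```

The point is that the client hard-codes nothing but the media type: which transitions exist, and for how long that knowledge is valid, both come from the server.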


Do you know of a real-world working example of a HATEOAS system? This strikes me as an AI-Complete problem.


The intelligence driving the object doesn't have to be artificial. Usually the local object is acting as an agent for a human user. Common examples are REPLs and browsers.


You just described a bunch of design constraints. The basic OS facilities and the design of the Internet already make some design decisions for you. REST is adding more. That is designing things up front. People said the same thing about XML, and of course its only advantage is some level of self-documentation (which is going to break down pretty quickly when things get complicated).


> Using REST over a real network is not equivalent to object-oriented programming. Don't be asinine.

Okay. I'll let Alan Kay argue the exact same point then.

http://video.google.com/videoplay?docid=-2950949730059754521

The whole video is worth watching but the relevant part starts at 43:00. This video was made three years before Fielding submitted his dissertation.

(note: this isn't an appeal to authority fallacy since he's making an actual argument.)


I am not interested in what Alan Kay has to say. I asserted that you specified a bunch of design constraints. You did. You tried to counter by pointing out that it's just OO design. However, I'm not interested in arguing over what categories things fit into. The point is that the network imposes limitations and you often need a domain-specific design to get around them. For example, HTTP is awful for soft real-time applications.


I was not countering the claim that REST entails design constraints, since at the time I believed OOP itself is a (useful) set of design constraints (the biggest being no access to state variables; "getters" and "setters" are bad).

But now that I think about it, I realize why people have a hard time understanding REST. It's not a "design" constraint any more than OOP is. It's an implementation constraint. In the case of OOP, I think people confuse those notions because they conflate designing a system with designing class hierarchies. But OOP doesn't have anything to do with classes. A system is object oriented when state is private, objects communicate by passing messages, and methods are late-bound.

Similarly, you don't really design a system to be RESTful; you simply commit to the idea that you never know — a priori — what methods will be available on a resource. You determine them by asking the object, and those methods are only valid as long as the cache-control header says they are. Just about everything else follows from that.

> the point is that the network imposes limitations and you often need a domain-specific design to get around them

That's why REST emphasizes caching, stateless communication, the appropriate use of status codes, and the appropriate use of verbs (idempotent vs. non-idempotent).
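One concrete reason the idempotent/non-idempotent distinction matters over an unreliable network: idempotent verbs can be blindly retried, non-idempotent ones can't. A hedged sketch — the `send` callable and its failure mode are invented for illustration:

```python
# Verbs HTTP defines as idempotent: safe to replay after a timeout.
IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE"}

def request_with_retry(send, method, url, attempts=3):
    """Retry on connection failure, but only for verbs where replay is safe."""
    last_error = None
    tries = attempts if method in IDEMPOTENT else 1
    for _ in range(tries):
        try:
            return send(method, url)
        except ConnectionError as e:
            last_error = e
    raise last_error

# Hypothetical flaky transport: fails once, then succeeds.
calls = []
def flaky_send(method, url):
    calls.append(method)
    if len(calls) < 2:
        raise ConnectionError("timeout")
    return 200

print(request_with_retry(flaky_send, "PUT", "/orders/42"))  # 200, after one retry
```

A POST that times out gets no automatic retry here, because replaying it could create the order twice — which is exactly the guarantee the architecture-level verb semantics buy you.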

> for example, HTTP is awful for soft real-time applications.

I wouldn't write an OS kernel in Ruby either.


>But now that I think about it, I realize why people have a hard time understanding REST. It's not a "design" constraint any more than OOP is. It's an implementation constraint.

Your attempt to enforce some dichotomy between design and implementation is hopelessly misguided.

>That's why REST emphasizes caching, stateless communication, the appropriate use of status codes, and the appropriate use of VERBs (idempotent vs non).

These don't solve the major problems. If I want caching I can trivially implement it myself. It's a useless thing to implement at the architecture level.

Similarly for idempotency etc. It's all a huge academic wank. An appropriately designed protocol will solve all these problems without the constraints.

>I wouldn't write an OS kernel in Ruby either.

What a smug yet clueless response. You're begging the question.

I'm frankly bored of talking to someone who knows nothing about protocol design, but defends HTTP as a "proven" technology. It clearly isn't, it fails on many levels, and the whole web stack is completely fucked. Good day.


> These don't solve the major problems. If I want caching I can trivially implement it myself. It's a useless thing to implement at the architecture level.

Legions of computer scientists and computer engineers disagree with you. Caching is fundamentally an architectural concern.

>I'm frankly bored of talking to someone who knows nothing about protocol design....

Please point to your widely commercially deployed protocols before accusing others of ignorance.

> ..but defends HTTP as a "proven" technology. It clearly isn't, it fails on many levels, and the whole web stack is completely fucked. Good day.

You've demonstrated an astounding lack of respect for pretty much everyone on this topic here, and I find it incredible that I have to tolerate the amount of bitterness emanating from you. Why are you participating here at all, unless you just want to tell us we're all clueless so you can feel superior?


My last statement was rude. Please accept my apologies.


I've described object-oriented programming[1] where each object in the system has a URL.

[1] The Smalltalk variety of OOP which is about encapsulation, message passing, and extreme late binding.


Using REST over a real network is not equivalent to object-oriented programming. Don't be asinine.


REST's problem domain is the design of a distributed hypermedia system on a global scale. It's pretty general because it needs to be.

A "well documented protocol" (binary or not) by definition is interoperable within a certain context. The question is, how big is the scope of that context?

The point of the Web protocols is to provide a framework for evolving bits of agreement - not "big design up front", except for the essential bits that have proven themselves, or are essential to bootstrap communication. HTTP, MIME, and URI are those essential bits of agreement at the moment. Witness how HTML isn't as essential as it used to be, given the growth of JSON APIs and mobile native apps.


Except HTTP hasn't proven itself, except to a growing set of specialist programmers who don't know any better. The web is hideously unreliable. People have become accustomed to reclicking links and reloading pages. Aside from hyperlinks, the only "innovations" of the web already existed in a more efficient form in operating systems decades ago.

As for REST, I'm sure it's nice for some things. However, if you think interoperability is going to magically spring out of it, you're ignoring the lesson of XML, which is that having an (ostensibly) human-understandable format is not going to magically make machines more interoperable. All it will do is allow people to make bad guesses about the meaning of documents instead of reading the documentation.


There's nothing magic about interoperability — it's all about the architecture and the documentation. XML isn't an architecture, it's a tagged data format. I cannot understand your line of argument about how the two are related.

Similarly, to suggest that HTTP hasn't proven itself (compared to what?) or that "innovations" of the web existed decades ago (really!?) is nonsensical to me. It's the most widely deployed application protocol on Earth. It handles billions of dollars of transactions.

You claim the web is unreliable. I think that's a layer error. The issue is that networks are unreliable. That's not something one can paper over.


You put XML and REST in different categories. So what? They are both attempts to provide systems interoperability for a huge domain. What we got out of XML: some ability to browse the format with standard tools, and some human-readability. That is all we will get out of REST. If that's the point, so be it. If you want to use it, so be it. I might use it for some projects. However, to put it forward as something new and special is being willfully ignorant of history.

>Similarly, to suggest that HTTP hasn't proven itself (compared to what?)

Compared to writing your own protocol that is appropriate for your application. The fact that people think HTTP is good enough has meant the browser has only recently acquired the ability to have a proper duplex channel without polling. Sorry, but that's just pathetic.

>or that "innovations" of the web existed decades ago (really!?) is nonsensical to me.

Name one thing that can't be constructed more efficiently and flexibly on the desktop, aside from web links.

>It's the most widely deployed application protocol on Earth. It handles billions of dollars of transactions.

So what? Because it's popular, that means it's good? I used to be dismissed when, as long as seven years ago, I was telling people we needed sockets and proper client/server in the browser. Now I'm getting the last laugh as I watch the browser vendors slowly do this stuff. At each step, it is labeled "innovative" and interesting and hyped beyond all comprehension. The progression of this trend is slowed by the insistence on inventing stupid premature optimisations like caching as a basic architectural feature.

>You claim the web is unreliable. I think that's a layer error. The issue is that networks are unreliable. That's not something one can paper over.

Then you're ignorant of the potential of low-level protocol design. Sorry, but I absolutely can tailor my network protocol to my particular application. The web takes the position that everyone should use HTTP. It is a terrible protocol for many things. How are you going to design your soft real-time applications using HTTP? The answer is you can't, because it is impossible to get the right guarantees. HTTP has only "proven itself" in the same sense that Windows has; it was good enough at the time, so now it's blown up and everyone's using it. So what? I want more.


> So what? They are both attempts to provide systems interoperability for a huge domain [...] However, to put it forward as something new and special is being willfully ignorant of history.

Firstly, they were not both attempts at systems interoperability for a huge domain. XML was about building a specific format. REST is an entire architecture. It's the difference between designing a car and designing the interstate freeway system.

Secondly, let me get this straight. It's been a long standing goal for decades of both academia and industry to build a global scale interoperable distributed system. That this was done with distributed hypermedia, bridging languages, graphics formats, operating systems and computer architecture is not "new and special"?

> [HTTP isn't proven] Compared to writing your own protocol that is appropriate for your application.

Most hand-written protocols suck.

It's also rarely required these days, given the strength of existing application protocols and the widespread desire for interoperability, but clearly you don't value that.

> The fact that people think HTTP is good enough has meant the browser has only recently acquired the ability to have a proper duplex channel without polling. Sorry, but that's just pathetic.

No, that's not pathetic, it's a consequence of the economics of scale. It is very difficult to economically sustain an event-driven internet scale system. e.g. http://roy.gbiv.com/untangled/2008/economies-of-scale

> Name one thing that can't be constructed more efficiently and flexibly on the desktop, aside from web links.

Firstly, most of what we do with computers these days is communication, processing, and commerce over web links.

Secondly, most data management applications are vastly more flexible, interoperable, and efficient with web technologies than they were with desktop technologies such as Access or Powerbuilder.

> I used to be dismissed because as long as 7 years ago I was telling people we needed sockets and proper client/server in the browser. Now I'm getting the last laugh as I watch the browser vendors take decades to slowly do this stuff.

I wouldn't be laughing yet. WebSockets is a sideshow that's not going to change a whole heck of a lot of how web apps are built. There will be interesting uses for it, but it's ultimately limited in scale and scope due to a lack of shared design constraints.

> Sorry, but I absolutely can tailor my network protocol to my particular application.

Sure you can, but you're basically making a value judgement that interoperability is of no concern to you, and that reuse is of no concern to you.

I mean, why use TCP, when we can just roll our own transmission layer on UDP? There are times that's needed (RTP), but we get a lot of productivity benefit with TCP.

> HTTP has only "proven itself" in the same sense that Windows has; it was good enough at the time, so now it's blown up and everyone's using it.

Sorry, that's just nonsense. The web has been a vast success story for global interoperability, and that can be directly attributable to the design constraints embodied in the main protocols of the web (HTTP, URI, and MIME).

I highly suggest you read Roy's thesis and reflect before postulating your opinions on this subject, since you really don't seem to have any appreciation for the amount of thought and engineering that went into the Web.

I would be more than happy to read any sources you may have on what other protocols and/or techniques are clearly superior to the Web protocols (presuming I retain interoperability at scale as a major value).


SOAP wants you to call procedures, and therefore you need a static list of procedures to call. An object, on the other hand, simply responds to messages. If you send it a message it doesn't understand, it can respond with something like METHOD_MISSING, which translates nicely to HTTP as 404.
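The METHOD_MISSING analogy, as a small sketch (the `Order` class and its message names are invented for illustration): instead of a fixed procedure list, the object dispatches messages dynamically and answers with a 404-like code for ones it doesn't understand.

```python
class Order:
    """An object that responds to messages rather than exposing fixed procedures."""
    def __init__(self):
        # The messages this object currently understands.
        self._handlers = {"get": lambda: (200, {"status": "open"})}

    def send(self, message):
        handler = self._handlers.get(message)
        if handler is None:
            # No method for this message -- the moral equivalent of HTTP 404.
            return (404, None)
        return handler()

order = Order()
print(order.send("get"))      # (200, {'status': 'open'})
print(order.send("refund"))   # (404, None)
```

The contrast with a static procedure list is that nothing breaks at "compile time" when a message is unknown; the object itself reports the failure in-band.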


Yes, I want a static list of procedures to call. GET, PUT, POST, and DELETE are too coarse-grained.

Method missing is, in my book, completely different from Page Not Found. Method != Resource.


404 doesn't mean "Page Not Found". It simply means "Not Found".

This isn't pedantry: the result of a request (a message) to an object is a representation of that object's state. 404 is an application-level error saying that the remote object has no method for dealing with the message. Obviously, that also means it's not going to be able to send you a representation of any object's state in response to the message.

Also, REST doesn't preclude having a static list of messages. It just prescribes that it's up to the remote object to report the messages to which it can respond, and up to the local object to decide whether to statically record those messages somewhere so that it doesn't have to ask again unnecessarily. The remote object can even helpfully tell you when that list of "procedures" might expire, so that the local object knows when it should ask again.


404 means the requested URI is not found, not "I can't/won't use this method".

For "I won't use this method on the resource you specified" you need a 405

For "I do not understand this method" you need a 501.


Well, this is a textbook case of a semantic divide: deciding what the codes should mean for your application. This is similar to figuring out whether NULL should mean "not applicable" or "not known" for a particular database table.

Now, I don't really think it's worth arguing and I can certainly see why you would use those codes that way but for me I've always treated 4xx codes as being about the resource/object and 5xx codes as being about the object's environment (i.e. the server). Said another way, I treat 5xx as runtime exceptions and 4xx as application error messages.

I'd never write code that returned a 5xx error from within my application for instance, that's for the server to handle.

405 means that the object knows about the method but for some reason won't allow you to call it. In OOP parlance, you would use it if someone tried to call the "Drive()" method without first calling the "StartEngine()" method. Indeed, RFC 2616 prescribes that you return a list of valid methods (the Allow header) along with a 405.

Thus, the only real candidates for something like METHOD_MISSING are 400 and 404. But 400 seems like it's for something that the application can't even understand. Like if the client is using some weird encoding in the host headers or something.


Are you saying SOAP won't tell you it doesn't understand if you send a bad message?


> I don't see any reason why SOAP can't support dynamically changing sets of procedure calls. Furthermore, I think that is a bad idea since it will break backward compatibility. No amount of web-buzzwording will get around that problem.

Of course, enclosing a message in a SOAP envelope does not preclude RESTfulness. If the set of messages to which an object responds is a function of its current state, then it's RESTful (or at least leads to a RESTful design).


So what you're admitting is that REST doesn't add anything novel in terms of being discoverable?


REST doesn't offer anything novel because the basic idea is at least as old as Smalltalk.

REST isn't even about discoverability. That's just a natural consequence of calling methods that might only exist when an object is in a particular state (i.e. the object creates the methods at runtime).

The difference between REST and (usual) SOAP is the difference between late-binding and "extreme late-binding"[1]

[1] http://www.google.com/#sclient=psy-ab&hl=en&source=h...


I didn't mean to say that (but I guess it looks like I did). Obviously there are other ways of signalling receipt of a bad message (throwing exceptions, for instance). METHOD_MISSING seems more natural, to me at least.

What I meant to say is that in REST, the set of messages to which an object will respond is a function of the object's current state. With SOAP, a URL endpoint always takes the same set of messages (until the code is rewritten or something).
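That difference can be sketched directly (a hypothetical order object, invented for illustration): the set of messages it accepts changes as its state changes, which a fixed SOAP-style endpoint contract can't express.

```python
class StatefulOrder:
    """An object whose accepted messages are a function of its current state."""
    def __init__(self):
        self.state = "open"

    def allowed_messages(self):
        # What an OPTIONS/Allow response would report right now.
        return {"open": {"get", "cancel"}, "cancelled": {"get"}}[self.state]

    def send(self, message):
        if message not in self.allowed_messages():
            return 405                      # not offered in this state
        if message == "cancel":
            self.state = "cancelled"
        return 200

order = StatefulOrder()
print(order.send("cancel"))   # 200 -- allowed while the order is open
print(order.send("cancel"))   # 405 -- no longer offered once cancelled
```

A SOAP endpoint, by contrast, would advertise `cancel` in its WSDL forever, and the client would only learn it's invalid by trying it.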


I don't see any reason why SOAP can't support dynamically changing sets of procedure calls. Furthermore, I think that is a bad idea since it will break backward compatibility. No amount of web-buzzwording will get around that problem.



