
REST is not a scam, it's THE term for the underlying architectural style of the web.

Your straw-man interpretation is a scam.

The crappy Silverlight app is badly designed, and the performance issues you were seeing can be optimised away trivially (create a new composite resource which can represent the entire tree in one go). That has nothing to do with REST.
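A sketch of what that composite resource could return (the tree shape and URIs here are hypothetical, and Python is just for illustration):

```python
# Sketch of a composite resource: instead of one request per node,
# expose a single resource whose representation embeds the whole tree.
# The "node" dict shape and the URI scheme are hypothetical.

def tree_representation(node):
    """Serialize a node and all of its children into one document."""
    return {
        "uri": "/nodes/%s" % node["id"],  # link to the individual resource
        "name": node["name"],
        "children": [tree_representation(c) for c in node.get("children", [])],
    }

# A single GET on something like /nodes/1?view=tree could return this
# composite document instead of forcing the client to walk the
# hierarchy request-by-request.
tree = {"id": 1, "name": "root", "children": [
    {"id": 2, "name": "left", "children": []},
    {"id": 3, "name": "right", "children": []},
]}
doc = tree_representation(tree)
```

The client still gets per-node URIs inside the document, so nothing stops it from interacting with individual nodes later.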

"the #1 delusion in distributed systems is that you can compose distributed operations the same way you compose function calls in a normal program" - This is exactly what REST addresses as a style, i.e. REST is not RPC.

Latency is another issue which REST addresses directly. This is why caching is an explicit constraint.

I have no idea what your point is re: security, sorry. Modelling state transitions is not that difficult, many languages have existing tools to help you do this.



People say stuff like "[REST is] THE term for the underlying architectural style of the web", but that assertion is pretty much back-rationalized onto the web. I know I'm not the only HN'er who was there at the beginning for HTTP, and this conceptual purity that RESTians allude to just wasn't there.

For as long as there has been a web, there has been at least one (often more than one) conceptual ideal claimed to be the heart of the World Wide Web. The reality is that the whole system is hacked together. Things that work well tend to be discovered, re-discovered, and perpetuated --- but that doesn't make any of them the web's "underlying architectural style".

I tend to buy the idea that REST-ish-ness makes for better, clearer, more usable APIs than RPC. But I also think REST needs to win on the merits, not by waving some imaginary "REST is the web" flag.


The reality is that both are true - there was an "HTTP Object Model" that TBL and Roy Fielding had in mind in the 90s, but it was never formalized until Roy's thesis. Not everyone on the various IETF and W3C committees understood or agreed with it, clearly, and there are a lot of hacks in practice. There were attempts like HTTP-NG to supplant HTTP with CORBA-like Object RPC, but the arguments against this were basically informal arguments in favour of the uniform interface of URI + HTTP + MIME. Even today, the current HTML5 leads don't particularly seem to have much appreciation for the style.

That said, starting with the Apache web server, Roy had a lot of control with its architecture and approach to supporting particular HTTP features, using his mental model of the "Web's architectural style" to guide it. After 2000 and the thesis, there are many people that started implementing their clients and servers with the style as a philosophical guide.

The style also tends to be widely misunderstood and buzzword-ified, which is scammy.


I'm not sure I buy the argument that being a key contributor to Apache gave Fielding a huge amount of influence over the architectural style of web applications. The barn door on that was opened with CGI, and since then, the architecture of web applications has mostly been up to the market and the lowest common denominator of programmers.


I was not referring to the internal architecture of web applications inside an origin server. I was referring to the architecture of a web application across the various components (agent, proxy, gateway, server), connectors (client, server, cache, resolver, tunnel), and data elements (resource, identifier, representation, metadata) in the style.

The architecture of those interactions has remained fairly stable over the past 15 years; even specs that route around REST, like WebSockets, try to at least adopt the HTTP upgrade & tunnel connector (HTTP CONNECT) to ensure they integrate consistently with the style.


are you saying you are not convinced that the dissertation holds water?

afaict, REST is (right now) the undisputed way to interpret the web's architecture.


I'm saying what I said in my comment.

Do you disagree with any of it? I might be wrong, so a specific disagreement would be interesting.

An appeal to authority: less so.


honestly, I'm not sure what you were trying to say.

Yes, it is a post-rational analysis of the web; yes, the web was not 'designed', it evolved. But do I think that, because it evolved, it doesn't/can't have an underlying architectural style? No, I don't. Do I think Fielding's analysis is sound? Yep, and so do most others who have read it (which is a lot of people).

There is a big missing part of the dissertation though - it doesn't go into much detail about hypertext. Apparently Fielding had intended to include it but just ran out of time.


I'm saying something simple: that claims about REST being "the underlying architecture of the web" do not matter, and are moot. The web doesn't have an "underlying architecture", or if it does, it's much simpler than even REST.

The web of 1995 clearly did not work the way Fielding describes REST.

That doesn't make REST bad, but it does mean you have to advocate for it on the merits, not by waving a flag.


Tom, in what ways did the web not work the way Roy describes REST? I'm honestly curious why you think this.

All of the elements he describes in the thesis were deployed by 1994 and codified in the first internet draft for HTTP 1.0: http://tools.ietf.org/html/draft-fielding-http-spec-00

I am generally in agreement that it's best to advocate REST on the merits, but I also think you may be making light of the constant barrage of attacks against the Web architecture through the 90's (with CORBA) and early 2000's (with SOAP) that were defended against by a small core of people, some of whom (Mark Baker) did so at great personal and health expense.

There's a reason people wave the flag - we can't keep arguing complicated topics from first principles if we are ever going to progress.


I think what you're hearing me express is skepticism that the (self-appointed) elite that makes standards or writes dissertations about architecture is in any way the final word on what constitutes the web. Without the hundreds of thousands of crappy applications that actually populate the web, it would be an academic exercise.

Regardless of what the RFC says, the CGI interface that provided the foundation for the original wave of web apps was just a series of getenv()'s. It was absolutely not the case that web developers in 1995 were careful to make sure that they represented state consistently or provided discoverable resources. Then as now, most web applications (and APIs, where they existed) were black boxes with parameters defined by expediency.

Also, even Fielding doesn't claim that RPC interfaces are wrong. He seems to have a much bigger (and more reasonable) problem with people pretending that RPC interfaces are REST. There are plenty of interfaces that make just as much sense in RPC as they would in REST; it's not unlikely that there are plenty of them that make more sense expressed as RPC.

I also don't understand the emotional appeal. CORBA vs. REST isn't a moral issue; people who sacrifice their health to make sure we're not using SOAP have their priorities screwed up.

(Thomas, please, btw).


Thomas, sorry.

I would actually think Roy (and the actual elite that built HTTP in the 90s) would agree with you on your first point - he defines REST, but that doesn't define the Web, which is hundreds of thousands of scrappy applications. REST was intended to cover the common case of the Web so that the standards themselves will be optimized to the common case. The "self appointed" elite are wrong to suggest that REST is the Web, but to me it's just as wrong to suggest that REST doesn't provide a useful abstract model of what makes the web work.

Regarding CGI, yeah, it was a hack, and it worked. But look at how website design evolved -- sometimes those websites sucked because they changed URIs all the time breaking bookmarks, search engines, etc. These practices weren't thought of in academic terms, no, but the point is that there was a theory underlying what emergent properties made websites good and popular (stable URIs for example, which led to better search, and bookmarking), which led to mainstream aphorisms like "cool URIs don't change" in the 97-98 timeframe. This was eventually codified academically in Roy's thesis a few years later. My point is that this wasn't completely accidental, there were people actively advocating "good practices" in the 90s, even without a formal published theory - it was informally understood by those who were on the mailing lists.

And I agree, RPC isn't wrong; the point is that it is a fundamentally flawed approach if your goal is interoperability at a global scale. If you want client A to talk to server B, have at it, just don't call it REST or assume you'll get the desirable properties. Many so-called REST APIs aren't globally interoperable due to this conceptual baggage, but some are better than others.

As for the emotional appeal, my point is... these debates have been discussed ad nauseam for nearly 15 years, with many cases involving personal sacrifice (justified or not). Debating on the merits is fine, but having to go back to first principles every 5 years for a new generation means we are just spinning our wheels. That's why there is often an appeal to history or authority, i.e. "Read the mailing lists, they're archived", etc. That's not a great situation, I admit. But most new technologies have traditionally had a "vendor engine" behind them pushing marketing, training, books, etc. to perpetuate the meme. That seems to be less common with the open source / internet vendor world, or maybe it's just that REST is still too new and misunderstood.


I can't find anything to disagree with here (well, I could pick nits about RPC not scaling). Damn you.


can you really get 'much' simpler than REST already is? it's only 5 constraints.

Why is it significant that the web evolved between 1995 and when Fielding defined REST?

To most people, the paper is well-reasoned, and REST is observable in the web, where the constraints appear to produce their supposed beneficial effects. I think it's reasonable to see REST as the underlying style of the web, and the web has some very beneficial properties (scalability, evolvability) which therefore count in REST's favour.


None of this has anything to do with what I said, or with the criticism of REST that you replied to originally. I don't agree with his take on REST (at least not entirely), but it wasn't lazy. This argument, which uses the string "Fielding" at least once per comment, is lazy. That's all I'm saying.


    (create a new composite resource which can represent the entire tree in one go)

This is the problem I face with REST all the time. I have yet to see someone pull this off gracefully. Take the hierarchy Authors/Books/Characters. If I wanted the full list of all Authors, with all Books, with all Characters what are you suggesting I do? Now what if there was another level after that? And after that?

This simple example could work with the use of a 'depth' value which has been suggested. But it doesn't work all the time. Especially when there are forks in the hierarchy and you want to go deeper in one and not the other. Basically I've determined that it seems impossible to have a pure 'model' of your data and an efficient API.
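To make the 'depth' suggestion concrete (hypothetical data shapes, Python just to illustrate), a single depth cutoff might work like this - and it also shows why one number can't express "deep here, shallow there":

```python
# Sketch of a 'depth' query parameter: expand children only down to the
# requested depth; below the cutoff, emit a link instead of embedding.
# Entity shapes and URIs here are hypothetical.

def serialize(entity, depth):
    doc = {"name": entity["name"]}
    if depth > 0 and entity.get("children"):
        doc["children"] = [serialize(c, depth - 1) for c in entity["children"]]
    else:
        # below the cutoff, the client gets a URI it can follow later
        doc["children_uri"] = "/%s/children" % entity["name"]
    return doc

authors = {"name": "authors", "children": [
    {"name": "book-1", "children": [{"name": "alice"}, {"name": "bob"}]},
]}

shallow = serialize(authors, 1)  # books embedded, characters left as links
deep = serialize(authors, 2)     # characters embedded too
```

Note that the same depth applies to every branch uniformly, which is exactly the forked-hierarchy problem described above.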


    Take the hierarchy Authors/Books/Characters. If I wanted
    the full list of all Authors, with all Books, with all
    Characters what are you suggesting I do? 
The service could have a /characters resource, which returns a list of all characters, along with the author/book.
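For instance, the flattened resource might denormalize each character into one row (all names here are made up, Python just for illustration):

```python
# Sketch of a flattened /characters resource: one list, each entry
# carrying its book and author inline. The data is hypothetical.

books = [
    {"author": "Tolkien", "title": "The Hobbit",
     "characters": ["Bilbo", "Gandalf"]},
    {"author": "Austen", "title": "Emma",
     "characters": ["Emma", "Knightley"]},
]

def characters_resource(books):
    """What a GET on /characters might return: denormalized rows."""
    return [
        {"character": ch, "book": b["title"], "author": b["author"]}
        for b in books
        for ch in b["characters"]
    ]

listing = characters_resource(books)
```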


Or the top level document looks something like (in JSON):

{ "characters": "<uri>", ... }

Doing a GET on the supplied characters URI gets you a list of characters, each with a mixture of relevant properties and a URI for the character itself so you can interact with it directly.
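Spelled out a bit more (hypothetical URIs and fields, Python standing in for the two JSON documents), the link-following looks like:

```python
# Hypothetical top-level document and the list it links to, plus a
# minimal link-follower. URIs and field names are made up.

top = {"characters": "/characters"}

characters = [
    # each entry mixes inline properties with a URI for direct interaction
    {"name": "Bilbo", "book": "/books/1", "uri": "/characters/7"},
    {"name": "Emma", "book": "/books/2", "uri": "/characters/8"},
]

def follow(doc, rel, resources):
    """Dereference a URI found under the given key in a document."""
    return resources[doc[rel]]

# Stand-in for the server: a table mapping URIs to representations.
resources = {"/characters": characters}
listing = follow(top, "characters", resources)
```

The point of the style is that the client only hard-codes the entry point and the link relation, not the URIs themselves.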


odata is one solution, I think.


(1) The "composite resource" which initializes the application is bound specifically to the application. This codesign greatly simplifies the implementation of an application that uses async comm (Javascript, GWT, Silverlight, etc.) because you'll never get communications choreography right unless you stick to the "one user action -> one request -> update UI" paradigm.

(2) Caching is part of the problem as much as it is part of the solution in making reliable apps. Anyone who's actually written AJAX apps knows that you frequently need to use tricks to disable caching to get things working right.

(3) As for security, this isn't just state transitions in the Turing sense but could involve numeric ranges and other kinds of variables. Generally, code that compares composite objects is the kind of code that hides errors, particularly given that for most languages there is something funky about the equals operator. (This is certainly true of C++, Java and PHP.)
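The same class of bug exists in Python, where a class without __eq__ falls back to identity comparison - a sketch with made-up names:

```python
# Sketch of the "funky equals" pitfall: a class that doesn't define
# __eq__ compares by identity, so two equal-valued composite objects
# silently compare unequal. The Permission class is hypothetical.

class Permission:
    def __init__(self, resource, level):
        self.resource = resource
        self.level = level

granted = Permission("/admin", "read")
required = Permission("/admin", "read")

naive = granted == required  # False: identity comparison, not value

# An explicit field-by-field comparison does what was probably intended:
def same_permission(a, b):
    return a.resource == b.resource and a.level == b.level

checked = same_permission(granted, required)  # True
```

In a security check, the naive comparison fails closed here, but the inverse mistake (or a language where == coerces) can fail open just as quietly.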



