Most of this has little to do with REST at all, e.g. URLs don't actually matter in REST (because of HATEOAS) and should be considered opaque. This is more HTTP API best practices (best practices in the author's opinion). REST has become a meaningless term.


This debate was settled almost a decade ago. URL forms don't matter in theoretical dissertation REST, but 15 years of practice has firmly decided they are indeed a core part of real-world REST.

HTTP based web services existed before Roy's dissertation and while the formalizing of them helped them mature rapidly, the idealistic principles aren't the final word on the subject anymore. Real world practice now is.


URLs don't matter for 99% of actual REST usage. Your browser doesn't care if you're POSTing to /posts/new or to /jeskew. "APIs" are the exception.


That's because the modern web itself is a REST API, whereas most of the things that call themselves "REST APIs" aren't.


APIs are what we are talking about.


My point is that APIs are an exception mostly because developers are conditioned by the RPC "APIs" mental models, which doesn't take effect when they're thinking about a "website" instead, not due to some fundamental technological difference.


And having extra irrelevancies to worry about such as "Version your API", "Use nouns, not verbs", singular versus plural, and nested resources are the consequences of being "practical".


Yes. As a hacker who leans on books, blogs, and well-defined principles, I've found that RESTful API design resources are often much more HTTP-RPC oriented.

Hackers should pay attention to:

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...

I have also found collection+json and hypermedia application language (HAL) to be useful.

Plus, I have spent a bunch of time reading the Restful Web APIs book from O'Reilly. Useful for me, as a relative newbie, in providing some logical foundation to start from . . .


http://roy.gbiv...

Money quote:

REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. That’s fine with me as long as you don’t call the result a REST API.


Do you have an example of an API that meets Fielding's definition of REST?


tl;dr: Not that I've dug into.

I've found a number of people who point at the GitHub API as a good example of a RESTful API that includes hypermedia links for clients to follow rather than constructing URI requests manually. It looks good to me.


> This is more HTTP API best practices

I'd disagree. If I were critiquing a non-REST API over HTTP, I wouldn't get upset about requiring a call to POST /processOrder at all, whereas I'd begin to convulse if they said it was a REST API.


That is more of an indictment of how poorly REST has been presented. It remains mostly impossible to get a very good definition of REST, or to point to an API (or anything else) that is a good example of REST.


> REST has become a meaningless term.

Indeed, and this is why I think "RESTful" APIs are a huge problem - they're largely undefined. We have SOAP for defining these things formally without the need to worry about implementation differences. Unfortunately it was essentially abandoned because it was too complicated and excessive. It would be really interesting to see something SOAP-like with some more modern features and made a bit simpler.


I'm sorry, did you just suggest something SOAP-like with more features? Because SOAP is too complicated?


> I'm sorry, did you just suggest something SOAP-like with more features?

I suspect that where you seem to have read "more {modern features}" but ultramancool meant "{more modern} features".

That is, I think the intent was that the features of the otherwise SOAP-like thing would be more modern than the features of SOAP, not that SOAP-like thing would have more features than SOAP and that those additional features would be modern.


Indeed, this is a correct assessment. Sorry about my poor phrasing. In particular I was thinking things like using the HTTP protocol correctly in a similar manner to more "modern" RESTful APIs, removing some of the unnecessary things and allowing for alternatives to XML like json for size and ease of access from web platforms.


Simple Object Access Protocol was too complicated.


Uhm, I mentioned that in my post. Thanks for re-iterating? Personally, I thought it was pretty simple, but I come from a land of CORBA. I don't think the ultimate solution for "this structure is too complicated" should be "fuck structure altogether" though. Rather, we should try to arrive at a simpler structure.


I wasn't criticizing your post. I was spelling out the name, because the tension between "Simple ..." and "... too complicated" amused me.


It's been noted before, The S Stands for Simple: http://harmful.cat-v.org/software/xml/soap/simple


"It's been noted before"

Many times.


URIs matter and REST matters because people have been using them incorrectly for so long; a common mistake is putting verbs in URIs instead of using HTTP methods. You are correct, though, that URL design doesn't matter (outside of conventions) as long as everything is done correctly.
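To make the contrast concrete, here is a sketch (resource names and paths invented for the example) of the same "cancel order" action in the two styles:

```python
# Hypothetical illustration: the same "cancel order 123" action expressed
# two ways. Names and paths here are invented for the example.
rpc_style = ("POST", "/orders/123/cancelOrder")  # verb leaks into the URI
http_style = ("DELETE", "/orders/123")           # the HTTP method is the verb

# The second form leaves the URI naming a resource (a noun), so the same
# URI also serves reads and updates without minting new endpoints.
uri = http_style[1]
print([(method, uri) for method in ("GET", "PUT", "DELETE")])
```

The point isn't the specific methods chosen; it's that the URI identifies a thing, and the method carries the action.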


Just off the top of my head: if you think of verbs exposed in URLs as a common REST mistake, you may not have fully absorbed the HATEOAS concept. If you have HATEOAS, you're unlikely to have verb URLs.


I agree, that is what I was trying to say. If you are doing things correctly (fully absorbed the concept), everything will be fine and no one has to argue about REST, HTTP, and URIs.


any good resources on HATEOAS?


I quite like this article from Martin Fowler on the "Richardson Maturity Model" that works up to HATEOAS:

http://martinfowler.com/articles/richardsonMaturityModel.htm...


Thank you.

I was expecting more from this HATEOS stuff. Everything I read before sounded like full auto-discovery of APIs.

But the only thing seems to be including possible next URLs in the responses.

Don't get me wrong, this is a good thing. It gives the backend devs more freedom and the frontend devs need less documentation to find out what's possible. But everyone still has to write the interfacing code to these APIs :D


At the risk of butchering the concept for the sake of simplicity:

In a HATEOAS API, clients need to know exactly one entry-point endpoint. Nothing else is hardcoded. There is no Python code that happens to know "if you want to add a widget to a product, you POST to /product/$X/widgets". The API itself tells you where to go.

An acid test: assume your API entry point is /api. If your API keeps HATEOAS kosher, you could, in a server-side update, change every other URL endpoint in the application without breaking clients, because the clients would be getting those URLs from the entry-point URL dynamically anyway.

(That's not really the point of HATEOAS, but it's a side effect.)

There is nothing wrong with simple HTTP APIs, and a lot wrong with explicitly RPC (verb) oriented APIs in general, so adhering to REST principles isn't an absolute good.

This might be the point where 'dragonwriter tells me I, too, have misunderstood HATEOAS. :)
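A minimal sketch of that discovery flow, with the server simulated by plain dicts and all paths invented; the client hardcodes only the entry point and the link names it cares about:

```python
# Hypothetical illustration: a client that never hardcodes any URL except
# the entry point. The "server" here is simulated with plain dicts.
RESPONSES = {
    "/api": {"links": {"products": "/product"}},
    "/product": {"links": {"add-widget": "/product/42/widgets"}},
}

def get(url):
    """Stand-in for an HTTP GET; returns the resource representation."""
    return RESPONSES[url]

# The client knows only the entry point and the link *names* it wants.
entry = get("/api")
products = get(entry["links"]["products"])
add_widget_url = products["links"]["add-widget"]

# The server could move this endpoint anywhere in a later deploy; the
# client would still find it, because it follows links rather than
# constructing URLs.
print(add_widget_url)
```

Swap any URL value in `RESPONSES` and the client code above still works unmodified, which is the acid test in miniature.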


One thing that's always bugged me (in a small way) about REST is that proponents/experts always insist that REST does not rely on any specific protocol (HTTP) but all discussions of REST carry a very strong assumption that specific actions are mapped to specific HTTP verbs. For example, Martin Fowler's doctor appointment scheduling example gives you "discoverable" hypermedia links for canceling and editing appointments, but they use the same URI and there is an implicit assumption that the client knows how to distinguish between the two by choosing the appropriate HTTP verb. It just seems kind of strange to say, well, REST isn't tied to HTTP, it's tied to any request/response protocol where each request is bound to a specific URI and one of these core HTTP verbs. Wouldn't implementing REST on any other protocol look an awful lot like tunneling HTTP over that protocol?

Another small gripe is the notion that a REST client need not have URIs to specific resources/actions hardcoded in them. The fact that you don't hardcode the specific URIs but rather a bunch of link strings that you then use to look up URIs makes this a lot less interesting. The way it's described generally makes it sound as if there is some kind of magical mechanism by which a client actually learns of the existence of a given endpoint, which would truly be magical. Really, all that's happening is that a client knows a name for a specific endpoint that it's looking for, and the API provides a way to look up the specific URI for that endpoint. Makes things tidy, but it doesn't seem like a feature that has much practical impact if you follow a "URIs shouldn't change" philosophy anyway.


Without breaking clients permanently. Running clients would not be able to continue their current interaction.


Yeah, I was just about to go edit that. "Cool URIs don't change" and all that. It would still be bad to change URLs; you just wouldn't need to update the client API code.


The most useful resource on HATEOAS -- and it's quite concise -- is this 2008 blog post from Roy Fielding (who defined REST, so it's straight from the proverbial horse's mouth):

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...


well, it's just Wikipedia, but it's actually a well-written page on the concept, in my opinion:

http://en.wikipedia.org/wiki/HATEOAS

key concept:

> "The principle is that a client interacts with a network application entirely through hypermedia provided dynamically by application servers. A REST client needs no prior knowledge about how to interact with any particular application or server beyond a generic understanding of hypermedia. By contrast, in a service-oriented architecture (SOA), clients and servers interact through a fixed interface shared through documentation or an interface description language (IDL)."

a truly RESTful service that follows the HATEOAS pattern doesn't require documentation to be hosted separately. it will supply all the information necessary directly through the RESTful service.


> a truly RESTful service that follows the HATEOAS pattern doesn't require documentation to be hosted separately.

It might need documentation of the special media types it relies on to be hosted separately. (One place where most "REST" APIs fail to follow HATEOAS is that they reuse generic media types but rely on out-of-band descriptions of the "real" format of the data, so that a client familiar with only the media type and the data would not understand the semantics of the resource representations being returned by the API.)


Hm okay.

The key concept I understood, but I don't get what clients and servers have to look like to just "get" each other in the way HATEOAS implies.


Think web browsers. REST was modeled after the existing web.


Web browsers have a human intelligence driving the interactions. API clients don't. Hence the huge gap and, as far as I can see, the pointlessness of HATEOAS.

In fact, I'm still totally lost as to the usefulness of REST at all, except as a generic term meaning RPC over HTTP that isn't as clunky as SOAP. Which isn't what REST is. I've yet to see or use an API that was easier to deal with because it was REST.


The Googlebot drives REST APIs just fine, and would be impossible to write using an RPC model.

> I've yet to see or use an API that was easier to deal with because it was REST.

Of course not, because people actually want to use RPC, and so shoehorn REST into RPC-like models, which destroys its usefulness.

If you're sitting at your computer and deciding that you're now going to write a client against Service A's API, the point of REST was missed, and Service A might as well have used RPC.

The point of REST is to decouple the client from the specific service, using the Uniform Interface and standard formats to allow clients to interact with any service that "speaks" the same formats.

But nobody is thinking in those terms. Everyone is still thinking that it's perfectly normal and OK to waste years of developer time reinventing the wheel, over and over again, for each new service that pops up. This is fueled by the services themselves, of course, whose companies want to use their API to lock you in.

So no, while this is the normal mentality, you won't see any major gains from REST.


This is probably something you already know, but consider: your HTML home page links to some page that performs an action. Tomorrow you create a new page and add a link to it from your home page. Magic done: your generic client (the web browser) can now show the user the new functionality, with no need to change anything on the client. If you have a native Android app (one that does not use the browser), you probably need to update it.


... That only works because there's a human driving the browser. I've yet to hear of a concrete example of how this would apply to API clients.


> ... That only works because there's a human driving the browser.

It works for unattended web clients (like Google's spider) too -- and not just for generating basic listings, but for structured schema-based data to update Knowledge Graph. That's one of the foundations of many of the new features of Google Search in the last several years.


It works because links are defined in the hypertext and discovered by clients (say, by the browser when a page is visited), and so are new functionalities. A (well-designed) web app is always up to date. In a native Android app, the API URL(s) are (99% of the time) hardcoded using knowledge of the API at a certain moment. This auto-discovery mechanism also works for a spider.

Auto-discovery does not mean that links are understood (@rel may help, but...); you may need a human to decide.

Suppose a (REST) application lists links to its "services" on its home page, with each "service" page describing the service following a certain standard. You could have a bot that periodically checks the application for services you are interested in and notifies you when a new service is available, with the possibility to navigate to the page and possibly subscribe.


two points:

1. Why do you necessarily assume that REST APIs are only accessed by robots? A human developer can benefit from HATEOAS quite a lot by being able to use the RESTful service's outputs as its own documentation. The developer can discover the features of the API by following links provided in the API outputs.

2. An API client can check that it matches the interface specified by the API just by comparing the URIs it is accessing with the URIs provided by the HATEOAS part of the RESTful service. You can automatically detect changes, breakages, or new feature introductions. This doesn't spare the client developer from having to update their client code, but it gives that developer a powerful tool for getting updates about the RESTful service.


So basically, putting structured comments in the API output would have the same effect? Instagram does that: they don't update their API docs, and instead put comments in the results so you might stumble on them while debugging. But specifically on hyperlinks, I don't see the point. For instance, a search API might want to return a next link. They can do that with a URL, or they can just include a next token that I pass back to the search API. The latter is somewhat easier to program against, since you often abstract away the HTTP/URL parts.
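For what it's worth, the two pagination styles described here can be sketched like this (URLs and field names invented for the example):

```python
# Hypothetical search responses illustrating the two pagination styles.
hypermedia_page = {
    "results": [1, 2, 3],
    "links": {"next": "https://api.example.com/search?cursor=abc123"},
}
token_page = {
    "results": [1, 2, 3],
    "next_token": "abc123",
}

# Hypermedia style: follow the URL as-is; no URL construction needed,
# so the server can change its URL scheme freely.
next_url = hypermedia_page["links"]["next"]

# Token style: the client rebuilds the request from documented structure,
# coupling it to that structure.
next_url_from_token = (
    "https://api.example.com/search?cursor=" + token_page["next_token"]
)

print(next_url == next_url_from_token)
```

Both reach the same page; the difference is only in which side (server or client) owns the URL structure.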


REST was in the original HTTP spec. Most people were just doing it wrong the entire time until recently when it became trendy to go back to the root REST ideals. And by most people I mean everyone involved in SOAP and RPC and other nonsense like that.


> REST was in the original HTTP spec.

No, it wasn't. Fielding's dissertation, in which REST was defined, argues that a certain set of principles were an underlying foundation of the structure of the WWW architecture in its original construction, proposes REST as a formalization of and update to those principles, and proposes further that updates to the WWW architecture should be reviewed for compliance to the REST architecture. [1]

So REST is a further elaboration of a set of principles inferred from the original HTTP spec, not something present as such in the original HTTP spec.

[1] http://www.ics.uci.edu/~fielding/pubs/dissertation/web_arch_...


I must disagree. REST is defined by four interface constraints: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch... and identification of resources is the first principle. This functionality is implemented by URLs in HTTP and is therefore critical. HATEOAS (the 4th principle) also seems to be gravely misunderstood. It only means that the API should have a version that is fully useable in a web browser.


> HATEOAS (the 4th principle) also seems to be gravely misunderstood. It only means that the API should have a version that is fully useable in a web browser.

No, it doesn't. In fact, a REST API can fully meet the requirements Fielding lays out without having any implementation that uses any communication protocol used by any web browser. REST is an architectural pattern that is independent of communication protocols.


Hi dragonwriter, is there a way that I can contact you with a few questions about REST that aren't appropriate for here? Twitter, IRC, email?


What the URLs look like is not a concern of REST though.

Can you cite your latter point, since that's contrary to how I've seen HATEOAS explained? Many libraries do expose a browser-navigable form of the API, but I can't see how that's how it's originally defined nor it being a requirement.


Are you saying that it's just as good to have a URL be /fj3849-2qfjimpa as /customer? I agree that it is not strictly a REST principle, but in an API clarity matters. If we are talking about good practices, would you rather work with a REST API with readable, concise URLs or not? Maybe I am missing the point, but the URL is one of the major building blocks of REST and choosing quality ones is a large part of API design.

I stated that HATEOAS is widely misunderstood so it isn't surprising that there are conflicting descriptions out there. No matter what your interpretation, this principle is about hypermedia and application state and says nothing about what a URL should or should not be.


> Are you saying that it's just as good to have a URL be /fj3849-2qfjimpa as /customer? I agree that it is not strictly a REST principle, but in an API clarity matters.

No, it is strictly a REST principle that the meaning of URLs other than the entry point is defined completely by the context in which they are used in resource representations and the definition of those resource representations (i.e., media types).

An implementation of a REST API may happen to present URLs with a consistent relationship of location structure to semantic meaning in the API, but that's entirely outside the scope of the REST API per se.

> If we are talking about good practices, would you rather work with a REST API with readable, concise URLs or not?

If you are using HATEOAS, URL format, readable or not, isn't a feature of the API at all (it's a feature of a particular implementation of the API, but it's one that clients don't need to be aware of).

> Maybe I am missing the point but the URL is one of the major building blocks of REST and choosing quality ones is a large part of API design.

You are missing the point. URLs as opaque identifiers are one of the major building blocks of REST, and if you are worried about choosing one as part of "API design", you aren't building a REST API.

The resource representation (media type) should tell you what the URLs used in it are for, not the URLs themselves.

> I stated that HATEOAS is widely misunderstood so it isn't surprising that there are conflicting descriptions out there. No matter what your interpretation, this principle is about hypermedia and application state and says nothing about what a URL should or should not be.

If you need to communicate the URL structure and identify relative URLs for various endpoints to describe an API, it is not a REST API following HATEOAS; "A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API)." [1]

HATEOAS clearly is misunderstood, as you've just demonstrated.

[1] http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...


Incorrect. Resources are identified by URIs, but they are opaque; n.b. the Opacity Axiom:

"The only thing you can use an identifier for is to refer to an object. When you are not dereferencing, you should not look at the contents of the URI string to gain other information."

http://www.w3.org/DesignIssues/Axioms.html#opaque


His dissertation is quite out of date with respect to current practical experience. Some of the ideas turned out to be great, others largely irrelevant. It was very useful at the time but don't forget it was realistically based on less than a handful of years of real industry experience. And most of that was in reaction to SOAP/SOA wars.

Before and during the time it was written there were existing HTTP based web services that were successful and they didn't follow all the principles outlined in the dissertation. Fast-forward 15 years and the vast majority of successful HTTP based web services still don't follow all the principles. Some turned out to be more useful than others.


I've always interpreted "hypertext" in this context to mean "a document containing hyperlinks" in a rather general sense - nothing about it being restricted to HTML.


My statement that HATEOAS is misunderstood is corroborated over and over by the replies to my comment. HATEOAS being misunderstood is so weird because it is an acronym of its own definition so I have no idea where all these other definitions are coming from. Let's examine it closer. HATEOAS stands for:

Hypermedia as the engine of application state

Hypermedia means HTML, period. Putting a list of URLs in a text or JSON response does not magically make it hypermedia.

Engine of application state means that all representations of the resources must be possible.

Combine those two and it means that all functionality must be accessible to and from the text/html content type. It must be able to handle all supported HTTP verbs and no fancy request headers that are not supported by HTML forms or hyperlinks.


> Hypermedia means HTML, period.

Wrong. Any resource representation that can contain links to other resources with semantic identification of the relationship they have with the current resource is hypermedia. HTML is particularly popular, but far from the only hypermedia format.

> Putting a list of URLs in a text or JSON response does not magically make it hypermedia.

No, not magically, but if you have URLs with identification of their relationship to the current document in JSON, it is hypermedia. Hypermedia is older than HTML, and extends well beyond it.

> Engine of application state means that all representations of the resources must be possible.

No, it doesn't. HATEOAS does not mean that all resources must have hypermedia representations, it means that all available actions on the application state (whether read actions or write actions) must be identified to the client through hypermedia. (E.g., through links communicated in hypermedia resource representations wherein the semantics of the representation and the media-type of the resource in which it appears and which applies to the linked resource define the available actions.) A resource representation to which access is provided via a hypermedia link but which does not define additional linked resources need not be in a hypermedia format (that is, some resources may be purely defined by things like straight -- not hyper- -- text resources, or images, or whatever.)

> Combine those two and it means that all functionality must be accessible to and from the text/html content type.

It doesn't mean that at all. What HATEOAS means is laid out most clearly by Roy Fielding in a blog post responding to the way in which his definition of REST had been misunderstood on this point: http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...

REST isn't about HTML or the Web, though the Web and HTML are the inspiration for the definition of the REST architectural style. REST (and HATEOAS more specifically) requires neither HTML, nor any of the other specific technologies (HTTP, etc.) that define "the Web".


"Hypermedia means HTML, period."

Um, no. XHTML and SVG are hypermedia.

JSON isn't hypermedia, but you can define hypermedia formats that use JSON (just as SVG uses XML).
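One such JSON-based hypermedia format is HAL (mentioned upthread), where links are grouped under a reserved _links member and identified by relation name. A minimal sketch, with the resource paths and the "total" field invented for illustration:

```python
# A HAL-style document (media type application/hal+json). The paths and
# the "total" field are invented for this example.
hal_doc = {
    "_links": {
        "self": {"href": "/orders/123"},  # canonical URL of this resource
        "next": {"href": "/orders/124"},  # standard "next" link relation
    },
    "total": 30.0,  # ordinary resource state, alongside the links
}

# A client that understands the media type navigates by relation name,
# treating the href values as opaque strings.
print(hal_doc["_links"]["next"]["href"])
```

Because the client keys off relation names ("self", "next") rather than URL structure, the server is free to change the href values without breaking it; that is what makes the format hypermedia despite being JSON.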


True, but with so much confusion over a blatantly clear definition I figured I needed to keep it simple.


I think that is at least in part because for a few years, every time someone thought they had a RESTful system, some genius would come through and declare it non-RESTful on account of not following some rule or other. So "RESTful" came to mean a type of HTTP API that tends to look a certain way. And so of course URLs matter.

Saying something is "RESTful" is a surefire way to find out the ways in which it fails to match up with what some obscure document says.


I agree. Links must be in the response; that's the point. Also, I don't understand this hierarchical-path mania that leads people to complicate things wherever URIs are parsed or produced. I generally use the query string for this; it's much easier. Only when I have some special need do I do otherwise.


you're right, this is really about making a good JSON API that maps CRUD operations to HTTP verbs. REST is often used as a shorthand for that, though it's not technically 100% accurate.

that said, it was still a good article. I agree with the author on his best practices for JSON API design.



