Updated 2009-06-23 22:17:39 by dkf

Read all about it over here:
    http://rest.blueoxen.net/ [ replaces http://internet.conveyor.com/RESTwiki/moin.cgi ]
    http://www.xml.com/pub/a/2002/02/06/rest.html
    http://internet.conveyor.com/RESTwiki/moin.cgi/RestFaq

This is what Roy Fielding, who coined the term REST, says:
The World Wide Web architecture has evolved into a novel architectural style that I call "representational state transfer." Using elements of the client/server, pipe-and-filter, and distributed objects paradigms, this style optimizes the network transfer of representations of a resource. A Web-based application can be viewed as a dynamic graph of state representations (pages) and the potential transitions (links) between states. The result is an architecture that separates server implementation from the client's perception of resources, scales well with large numbers of clients, enables transfer of data in streams of unlimited size and type, supports intermediaries (proxies and gateways) as data transformation and caching components, and concentrates the application state within the user agent components.

FWIW, this wiki is being adjusted according to the REST philosophy. Each page in this wiki now has one "official" way of being identified, i.e. this one is:
 http://wiki.tcl.tk/3513

No ".html" at the end, preferably. One other valuable access path is "tolerated" (both redirect to the above one):
    http://purl.org/tcl/wiki/3513  <== fallback, in case anything breaks

Other ways to reach this wiki are obsolete, including everything with "cgi-bin" in it.

-jcw

KBK (21 October 2002) - Please tell me that accessing pages by full title, e.g.,
    http://wiki.tcl.tk/REpresentational%20State%20Transfer%2c%20REST

is still acceptable?

Ah, yes... of course - numeric access is indexed, all the rest triggers a search, and remains fully supported - thanks for pointing this out. jcw

SC As I understand it, REST emphasises Resources which can be accessed via a URI, and suggests using the HTTP GET, PUT, POST and DELETE methods as per their original specs. Accordingly it should be possible to GET the contents of a page (a resource) either as HTML or wiki source or whatever other formats the wiki is able to deliver. Editing a page should be possible through a POST (as now) or PUT request; new pages should be added using PUT. Now, the current browser-based interface uses HTML forms and POST because that's how you can provide an easy browser-based editor. If GET of wiki markup and PUT/POST to upload edits were implemented, one could imagine a local Tcl app which could download pages from here, edit them and send them back -- sort of tkchat-like. This wouldn't remove anything from the current interface, just add the possibility of new kinds of apps.

How about the following URLs with different HTTP Accept headers:

DGP Why are you proposing different URIs for different formats of the same resource? Isn't this properly done in HTTP 1.1 by setting the Accept: header?

SC Of course, I'd forgotten about Accept. (I've modified the above accordingly; it originally had different extensions for different types.) Now the Accept header dictates the type of data returned. There is a conflict with the existing implementation, where a POST to a URI uploads a new version rather than a diff; this could probably be managed via additional form fields though.
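
A minimal sketch of what such a content-negotiated GET might look like from Tcl, assuming the server honoured the Accept header and that something like text/x-wiki were the agreed media type for raw wiki markup (both are assumptions, not current wiki behaviour):

    package require http

    # Ask for the raw markup of page 3513 by media type rather than by a
    # different URI.  "text/x-wiki" is a hypothetical type the server
    # would have to agree on.
    set tok [http::geturl http://wiki.tcl.tk/3513 \
                 -headers [list Accept text/x-wiki]]
    set markup [http::data $tok]
    http::cleanup $tok
    puts $markup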

JE IMO, using different URIs for different formats is acceptable, and even preferable in many cases. HTTP content negotiation (the Accept: header) is badly flawed; I liked the original scheme better. http://wiki.tcl.tk/3513.wml would always return the editable version, and http://wiki.tcl.tk/3513.html would always return the HTML-formatted output. Content negotiation is a good idea for the case where there's no extension, though.

(PS I'd love to see the GET *.wml / PUT interface implemented. Then I could whip up a 4-line Tcl client to download/upload wiki pages and use a decent text editor instead of this stupid textarea :-)

MC: this inspired me to create a small script to allow Editing the Tcl'ers Wiki using an editor of your choice.
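
A rough sketch of a client along those lines, in Tcl, assuming the hypothetical *.wml URLs above as well as a made-up edit URL and form field name (the wiki does not actually offer this interface today):

    package require http

    # Fetch the raw markup of a page (assumes the *.wml convention).
    proc fetchPage {page} {
        set tok [http::geturl http://wiki.tcl.tk/$page.wml]
        set text [http::data $tok]
        http::cleanup $tok
        return $text
    }

    # Send edited markup back as a form-encoded POST, much as the browser's
    # textarea does.  The edit URL and the "C" field name are guesses.
    proc storePage {page text} {
        set tok [http::geturl http://wiki.tcl.tk/edit/$page \
                     -query [http::formatQuery C $text]]
        http::cleanup $tok
    }

    # Round-trip a page through whatever $EDITOR is set to.
    set page 3513
    set file wiki-$page.wml
    set f [open $file w]; puts -nonewline $f [fetchPage $page]; close $f
    exec $::env(EDITOR) $file <@stdin >@stdout 2>@stderr
    set f [open $file r]; storePage $page [read $f]; close $f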

23nov02 jcw - See [1] for PUT vs. POST. PUT takes a resource (page) and replaces it. The wiki is slightly special, in that page creation is triggered from a change on an existing page. So as I read REST, you can PUT to a page 1234, but in the wiki you cannot just invent page numbers, i.e. a PUT to page 6789 must be rejected. POST could be used to append text to a page (not creating anything, in terms of URIs). It could even be used to create pages: posting a page to the wiki, not to a specific page, sending a complete new page with title and content, and the returned URI would point to the page so created. I'm not sure this is essential in the context of a wiki - but it looks like a natural way to extend things.

Uploading a diff - not sure, there's some risk here. I feel much more comfortable with posting just text which gets appended verbatim, i.e. an efficient way to grow a discussion on a page. It also means we'd have the option to do useful things with permissions, i.e. add "protected" pages which only accept POSTs, not PUTs. Note that today, the wiki essentially supports POST as its only mechanism, in a way that should have been done with PUT if the REST mantra had been followed.
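
In Tcl terms, the distinction might look roughly as below; the verbs, URLs and media type are assumptions about a server interface that doesn't exist yet, and the -method option needs a reasonably recent http package:

    package require http

    # PUT replaces the whole of an existing page; the server would be
    # expected to reject page numbers that don't exist yet.
    proc replacePage {page text} {
        set tok [http::geturl http://wiki.tcl.tk/$page \
                     -method PUT -type text/x-wiki -query $text]
        http::cleanup $tok
    }

    # POST merely appends text to an existing page; no new URI is created.
    proc appendToPage {page text} {
        set tok [http::geturl http://wiki.tcl.tk/$page \
                     -type text/x-wiki -query $text]
        http::cleanup $tok
    }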

Btw, if making changes becomes trivial to script, then it may open some floodgates we may regret: we keep the full history, but do we really want to encourage or facilitate "batch vandalism"?

I agree that local editing would be grand. See my comments at the end of the Wikidiff page on how updates of a local wiki copy are already starting to become a reality. It seems like a natural step to support a new edit mode for local wikis which are set up to be replicas of a central server. Such an edit mode would not just alter the local copy but connect and submit the page centrally as well. Even in today's system, such functionality ought to be very simple to implement.

Pertinent references:

  • short introduction [2]
  • overview [3]
  • design of REST: mistakes [4]
  • design of REST: .NET [5]
  • design of REST: RPC [6]
  • design of REST: ... extreme ... [7]
  • security ... [8]
  • security ... [9]
  • REST vs. SOAP [10]

A quote from the end of that last REST vs. SOAP page:
Finally, it bears repeating. Just because a service is using HTTP GET, doesn't mean that it is REST. If you are encoding parameters on the URL, you are probably making an RPC request of a service, not retrieving the representation of a resource. It is worth reading Roy Fielding's thoughts on the subject. The only exception to this rule that is routinely condoned within the REST crowd is queries.

Right now, the wiki is still mixing things up. One can edit a page through a GET - which is really meant to be used for requests which do not alter resources...

CMcC - 2009-06-22 21:17:37

ReST seems like it has a bunch of good ideas. However, hard-ReST isn't conducive to implementation, as PUT and DELETE method requests can't be generated at all from HTML, and can only be generated from JavaScript with difficulty.

So I would suggest that one must implement soft-ReST, coding the method into a POST or GET request, all the while remembering what one would do if browsers, HTML, etc. weren't broken by design.
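
A common shape for that workaround (assumed here for illustration; it is not how this wiki works) is to tunnel the intended verb in a form field and let the server dispatch on it:

    # Work out the verb the client "really" meant.  A browser form can only
    # GET or POST, so a hidden _method field carries the intended verb.
    proc effectiveMethod {httpMethod formFields} {
        if {$httpMethod eq "POST" && [dict exists $formFields _method]} {
            return [string toupper [dict get $formFields _method]]
        }
        return $httpMethod
    }

    # e.g. the browser POSTs, but declares that it wants a DELETE:
    puts [effectiveMethod POST {_method delete page 3513}]   ;# prints DELETE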

dkf - 2009-06-23 04:03:23

I thought that the wiki was at least partially REST-conformant. While you can retrieve the instructions for how to edit a page by GET, the actual edit transaction proceeds by means of a POST. (I've just double-checked this as I write this comment...)

CMcC - 2009-06-23 17:18:59

We have tried to keep the wiki ReSTful, yes, but only because it makes sense, and only insofar as it makes sense. I mainly posted here to record my sudden realisation that, however good an idea ReST is, it's not really implementable in its hard form in the majority of applications (because you can't readily perform PUT or DELETE from all browsers).

dkf - 2009-06-23 18:17:39

The other point I should have made is that it is important to differentiate between the model that the REST system is manipulating and the real data that backs it up. The model is that pages exist from the beginning of time (though initially with empty content) and we merely do not yet know their names, though those are discoverable. The reality of the implementation is a little different, though official state changes still only happen on POST. (The changing of log files is a separate matter.)

But that still only makes the wiki semi-REST-ful. There's no creating pages/“resources” without an existing reference to them. (Theoretically we could add DELETE without much problem, but there's not really much point for us to bother; we aren't really interested in supporting REST because the wiki is focussed on supporting information for humans, not computers.)