Since our dialogue with an imaginary eBay Architect took place in 2006, he has been promoted to imaginary Enterprise Architect at an investment bank! Convinced of the merits of REST, he took his enthusiasm for it into his new job and embarked on architecting a trading system using REST, or ROA, as an alternative to SOA.
But then he hit upon a snag: he had a REST "bank server" generating bids on an instrument and POSTing them into that instrument's REST "market server". Now he had two copies of his bid: one held by the bank server at one URI, the other in a "bid collection" held by the market server's instrument - at another URI.
He asked himself: "Which URI is the real one? Which host 'owns' the bid? Is the market's copy just a cache? If so, why does it have a new URI? Why doesn't the market host know the URI of the bank's original bid? Why can't servers become clients and just GET the data that their own data depends upon?" The server seemed to be dominating the conversation, not letting its 'client' server have a say in things.
Our worried Enterprise Architect noticed that such Service-Orientation permeated REST practice: there were "REST APIs" to Web sites, or "Web services" with a small 's'. Even AtomPub had a "service document"! Some patterns, like AtomPub, offered just simple read/write data services through the full HTTP method set. Some simply used such a read/write interface as a wrapper around more complex service functions.
He wondered: "Where's the Web in REST integration? The Web works great without PUT and DELETE: isn't using GET on its own RESTful enough?"
So, remembering something I said about "Symmetric REST", he contacted me again...
Enterprise Architect: I see we made it into Appendix A of the REST book by Richardson and Ruby!
Duncan Cragg: Indeed - even though I hadn't finished writing up our chat when it was published...
EA: So why did it take you so long to write it up?
DC: But I'm back now, intending to focus more on ROA's advantages over SOA.
EA: Great! Because I wanted to talk to you about that.
Where I now work, we are looking at REST or ROA as an alternative to SOA. However, all the available REST patterns still seem to see the world through Service-Oriented eyes.
I want to do REST like the Web does: to have different servers just publishing stuff that's all linked up. And "mashed up": to have that stuff, that data, "over here" depend on that data "over there": meaning that servers can be clients and vice-versa.
DC: Hyperdata that depends on someone else's hyperdata! Maybe rewrite rules over interlinked XHTML.
I called it "REST Observer" back then, but recent events on the rest-discuss mailing list have left me very wary of using the word 'REST' so openly in the name of something!
So I decided to hide it within a different word: 'FOREST'!
Here is a posting about FOREST that I recently made to the rest-discuss mailing list:
FOREST is a GET-only REST Integration Pattern defined simply as:
A resource's state depends on the state of other resources that it links to.
This means that resource servers must also be clients in order to see those dependencies.
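This dependency rule can be sketched in a few lines of Java. The class and method names here are invented for illustration; the point is simply that a resource's state is recomputed from the last-observed states of the resources it links to, which is why the server holding it must also act as a client:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: a resource whose state is a function of the
// states of the resources it links to. In FOREST, the server holding
// this resource would also act as a client, GETting each linked URI
// to keep its observation cache fresh.
public class DependentResource {
    private final List<String> links;        // URIs this resource links to
    private final Map<String, String> cache; // last observed state per URI

    public DependentResource(List<String> links, Map<String, String> cache) {
        this.links = links;
        this.cache = cache;
    }

    // Recompute this resource's state from its dependencies' observed states.
    public String currentState() {
        return links.stream()
                    .map(uri -> cache.getOrDefault(uri, "(unfetched)"))
                    .collect(Collectors.joining("; "));
    }
}
```

When a linked resource's state changes, re-running `currentState()` over the refreshed cache yields the dependent resource's new state.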
Common Web Pattern
FOREST is a REST Pattern derived from GET-only or polling Web use-cases, including mashups:
- feed aggregators or filters
- search index results pages
- pages that depend on a search
- Google's mobile versions of pages
- sites that create summaries of other Web pages
- sites that create feeds from Web pages
- creating pages or feeds from REST 'APIs' (GET only)
- Yahoo Pipes
FOREST is a REST Pattern for building "Enterprise Mashups" in an ROA / WOA / SOA.
OK - those of you without Dion Hinchcliffe in your feed reader may be feeling a little queasy at this point, but I'd encourage you to read on ... Actually, I quite like the phrase "Enterprise Mashup" since it lightens the gravity of that 'Enterprise' word.
Enterprise Mashup Markup Language is the nearest thing to this that I know about, but FOREST is quite different: it is much simpler and is /only/ a REST Pattern.
Patterns can be implemented in frameworks...
A FOREST implementation would inevitably run over HTTP, initially using just XHTML or Atom. I imagine fetching XHTML resources that are expected to contain links to more such documents. Any XHTML document could depend on any other, and they would all be interlinked. If your resource depends on another, it must have found that resource directly or indirectly through links in its own body. An alternative discovery mechanism: a resource could be told that it is being watched via an HTTP header on the GET request listing the URIs of the resources that depend on it - it could then watch and link back. The Etag would carry an automatically incremented version number.
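Such an observing GET can be sketched with Java's standard `HttpRequest` builder. This is only an illustration under the assumptions above: the URIs are made up, the `Referer` header is one candidate (see the formalisation list below) for declaring the dependent resource, and `If-None-Match` carries the last Etag seen, so an unchanged version costs no body transfer:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ForestGet {
    // Build (but do not send) a conditional, observing GET.
    // 'dependent' is the URI of the resource that depends on 'target';
    // 'lastEtag' is the version number we last saw for 'target'.
    public static HttpRequest observe(String target, String dependent, String lastEtag) {
        return HttpRequest.newBuilder(URI.create(target))
                .GET()
                .header("Referer", dependent)      // who depends on the target
                .header("If-None-Match", lastEtag) // skip the body if unchanged
                .build();
    }
}
```

The target server can record the `Referer` URI, then watch and link back to it; a `304 Not Modified` response tells the client its cached copy is still the current version.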
Rough Consensus and Working Code
I would ideally see this work towards a formal description via "rough consensus and working code". I intend to knock up a prototype of FOREST in a Jetty servlet and post it to GitHub; if that code works, I may get rough consensus...
What a FOREST XHTML/HTTP formalisation would specify: Updated
- use of HTTP headers (Etag, Cache-Control, Content-Location, Referer*)
- API*: doc builder, XPath body set/get*, callbacks (observed, notified*)
- 'Referer' is a possible header for the URIs of dependent resources
- the API would be language-independent, but probably Java-like
- the XPath 'get' would be extended to jump links from doc to doc
- every doc jumped to gets observed
- 'notified' means being told when the GET returns with the observed state
What a FOREST Java servlet and client library would implement 'under' these specs:
- a driver module loader: drivers animate resources through the API
- a document cache - in memory and maybe saved to disk or database
Resource animation would be done either by applying business rules that drive the API, or by adapting between external state and the API.
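The document cache above can be sketched as a small in-memory map. The names here are invented; the one FOREST-specific detail is that each stored document carries a per-URI version number, bumped on every replacement, which the servlet would expose as that resource's Etag:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the in-memory document cache: each document
// has a version number that increments whenever it is replaced, to be
// surfaced as the resource's Etag. Persistence to disk or a database
// would sit behind the same interface.
public class DocCache {
    public record Entry(String body, long version) {}

    private final Map<String, Entry> docs = new ConcurrentHashMap<>();

    // Store or replace a document, bumping its version number.
    public synchronized Entry put(String uri, String body) {
        Entry prev = docs.get(uri);
        Entry next = new Entry(body, prev == null ? 1 : prev.version() + 1);
        docs.put(uri, next);
        return next;
    }

    // Return the cached document, or null if we have never seen this URI.
    public Entry get(String uri) {
        return docs.get(uri);
    }
}
```

A driver animating a resource would write through `put`, and the servlet would answer GETs from `get`, comparing the entry's version against any `If-None-Match` header before sending the body.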
EA: Wow! That's amazing! Can I help build it?
DC: Of course you can. Know any Java?