Say we want to integrate multiple applications which handle order processing. OK, that's got to be one of the dullest starts to a blog post. Never mind, bear with me...

So, we have applications on separate servers for handling and driving data such as orders, product descriptions and catalogues, stock lists, price lists, tracking, packing notes and delivery notes, invoices, payments, etc.

We may choose an SOA approach, of course. But let's say our sponsors have heard of this cheaper alternative: REST! Which to them means 'using Web technology to save money'.

Now... suppose we push the time slider right back to before Mark Baker and the SOA-vs-REST Wars - or the 'SOAP-vs-REST Wars', as people naively called them - to when REST was simply (!) a description of the Web's architectural style...

What if we revisit the applicability of the Web, and its abstraction into REST, to the architecture of machine-to-machine distributed systems - to something like our order processing integration?

I think we'd quickly arrive at something that looks more like FOREST than, say, AtomPub...

Some pretty obvious things to notice about the Web and, indeed, REST:

  • The Web is essentially data on URLs of standard content types containing more URLs;
  • The Web is the Web because of the massive proliferation of links in that data;
  • REST mostly concerns itself with the consequences of GET, including caching;
  • The Web uses, I don't know, let's say 98% GET, 2% POST, around 0% other methods.

In other words, the Web, and its good qualities, are mostly based on:

GET URL -> HTML -> a.href=URL -> GET URL ..

When applying this Web/REST architectural style to our integration scenario, there are things that we can say right now with certainty will be different, but will have corresponding elements:

  • It's about data not documents, so HTML is probably going to be replaced by XML, although perhaps XHTML+Microformats or Atom would make a good compromise;
  • We have a choice of link specs: xhtml:a.href, atom:link.rel, xlink:href; I don't think we'll be using XLink, since no one else seems to;
  • We'll probably use machine-generated URLs perhaps containing UUIDs, GUIDs or whatever.

In other words, we're not going to be spinning a hypermedia Web - it's more a 'hyperdata' Web.
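The machine-minted URLs mentioned above are easy to picture. Here's a minimal sketch: the base URL and path scheme are invented for illustration, not a fixed FOREST convention.

```python
import uuid

def mint_url(app_base):
    """Mint a machine-generated ID-URL for a new resource.
    The base URL and '/resource/' path are hypothetical examples."""
    return f"{app_base}/resource/{uuid.uuid4()}"

url = mint_url("http://orders.example")
# e.g. "http://orders.example/resource/3f2b8c1e-...-..." - opaque to humans,
# but perfectly good as a stable identifier for other applications to link to
```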

So, in order to emulate the document Web in our hyperdata integration Web, we'll mostly be doing something like:

GET ID-URL -> XHTML -> a.href=ID-URL -> GET ID-URL ..
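That cycle - GET, extract links, GET again - can be sketched in a few lines. This is a toy, with an in-memory dictionary standing in for HTTP GET; the URLs, resource shapes and link structure are all invented for illustration.

```python
# A toy hyperdata web: each ID-URL maps to a resource whose content
# carries more ID-URLs. get() stands in for HTTP GET.
WEB = {
    "http://orders.example/order/9f3a": {
        "type": "order",
        "links": ["http://stock.example/item/77c2",
                  "http://pay.example/payment/0b1d"],
    },
    "http://stock.example/item/77c2": {"type": "stock-item", "links": []},
    "http://pay.example/payment/0b1d": {"type": "payment", "links": []},
}

def get(url):
    """Stand-in for HTTP GET on an ID-URL."""
    return WEB[url]

def crawl(start_url):
    """Follow links eagerly: GET a resource, extract its links, GET those..."""
    seen, frontier = set(), [start_url]
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        frontier.extend(get(url)["links"])
    return seen

reached = crawl("http://orders.example/order/9f3a")
```

Starting from the order, the crawl reaches the stock item and the payment purely by jumping the links in the data.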


Oh! I've got some slides of all this on Google Docs: we're up to Slide 2! Maybe right-click, open in a new window...


Symmetry - Slide 3

But by far the biggest difference between the Web and an integration scenario is that the asymmetry on the network goes away, even for a cross-enterprise integration.

Where the Web's browser clients and site servers have always been asymmetric - clients hidden away and only able to establish outbound connections - machine-to-machine integration is fundamentally symmetric: all servers can be made visible to each other.

Now, in order to keep the well-studied benefits described in REST, including separation of concerns, we should aim to maintain the client-server, layered structure in the use of the protocol.

But clients can be servers and vice-versa!

So, in machine-to-machine integration scenarios, we have:

  • Two-way GETs on machine-minted URLs pointing at XHTML+Microformats or Atom content containing more links.

In other words:

  • A hyperdata Web both created and consumed by the applications being integrated.

All of the dynamic data or hyperdata items in our order processing scenario will be distributed across the many applications being integrated. Each application serves its part of the hyperdata Web to the others.

And, of course, the hyperdata joins all these applications up: a payment resource in the accounting application will point to an order resource in the order processing application, etc.
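To make that concrete, here's one way such a payment resource might look, sketched as an XHTML+Microformats fragment. The class names and URLs are invented for illustration; they're an assumption about the content format, not something FOREST fixes.

```xml
<!-- hypothetical payment resource at http://pay.example/payment/0b1d -->
<div class="payment">
  <span class="amount">49.99</span>
  <span class="currency">GBP</span>
  <a class="order" href="http://orders.example/order/9f3a">the order paid for</a>
</div>
```

The accounting application serves this; the order processing application can GET it and follow the href back to its own order resource.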


Interactions and Application State

Now, each application has its own set of business rules and constraints over the hyperdata parts that it governs.

So how exactly should the applications publishing those bits of the hyperdata Web interact? How do orders interact with packing notes and stock levels, with payments and accounts?

In the Web, you go to a site and jump some links. Each page brings in CSS, JavaScript, images, maybe iframes: an eager assembly of pages from links to many resources, in contrast to the lazy, on-demand fetching of linked pages as the user jumps to them.

The browser at any time has a state that depends on the page and images, etc, currently being viewed, plus the history of previous pages, bookmarks, etc.

Search engines in the Web, without a user driving things, are eager to traverse links in order to do their work indexing pages. Order processing applications will probably have more in common with search engines than with browsers.

REST describes this in terms of hypermedia - links - driving 'application state'.

So we next need to decide what 'application state' is, in our Web- and REST-driven architecture for machine-to-machine distributed systems; where the user driving hypermedia link traversals is replaced by business rules or logic driving hyperdata link traversals.

Each integrated application has its own 'application state', so, to follow REST, this application state should be driven by the surrounding hyperdata of peer applications, according to those business rules.


Application State is Linked Resources - Slide 4

In fact - and this is a consequence of the symmetry of integration - 'application state' is those very resources that the application contributes to the hyperdata Web!

A stock tracking application's state is pretty well described by a bunch of resources describing the stock levels. A fulfilment application's state could be inferred by inspecting the outstanding packing notes.

We're not limited to the asymmetric browser-server of the Web, where the browser's 'application state' is never visible except when it POSTs something back.

It's more like a search engine, where you can publicly access an 'application state' that is entirely driven by the hypermedia crawled by the search bot. A search engine's application state is rendered into the results page resources you see when you do a search.

So the resources of each application in the order processing integration are driven by the surrounding, linked resources of the other applications.

You could rephrase REST's 'hypermedia as the engine of application state' when applying it to symmetric machine-to-machine integration in this neat way:

Hyperdata as the Engine of Hyperdata.


The Functional Observer Programming Model - Slide 5

So now, how do we program the hyperdata-driven-hyperdata of our integrated applications?

How do we animate the stock tracking hyperdata chunk over here in the face of today's packing notes in their hyperdata chunk over there?

That's what FOREST is all about!

The name 'FOREST' stands for 'Functional Observer REST'.

The words 'Functional Observer' describe FOREST's hyperdata-driven-hyperdata programming model. But it's much simpler than it sounds...

A FOREST resource in the hyperdata Web sets its next state as a Function of its current state plus the state of those other resources Observed by it via its links.

The best way to encode such state evolution is in rewrite rules or functions that match a resource and its linked resources on the left-hand side, then rewrite that resource's state on the right-hand side.
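Here's a minimal sketch of that rule shape, reusing the toy in-memory web idea from earlier. The resource structures, URLs and the business rule itself are invented for illustration: a stock-item resource observes, via its links, a packing note held by another application, and its next state is a pure function of its own state plus the observed state.

```python
# Functional Observer sketch: a resource's next state is a Function of
# its current state plus the state of resources it Observes via its links.

def get(url, web):
    """Stand-in for HTTP GET on an ID-URL."""
    return web[url]

def stock_rule(stock, observed):
    """LHS: match a stock item and the packing notes it links to.
       RHS: rewrite its level, deducting whatever has been packed."""
    packed = sum(note["quantity"] for note in observed
                 if note.get("type") == "packing-note")
    return {**stock, "level": stock["level"] - packed}

def step(url, web, rule):
    """Evolve one resource: GET the resources it links to, apply its rule."""
    resource = get(url, web)
    observed = [get(link, web) for link in resource["links"]]
    return rule(resource, observed)

web = {
    "http://stock.example/item/77c2": {
        "type": "stock-item", "level": 10,
        "links": ["http://fulfil.example/note/3e9"],
    },
    "http://fulfil.example/note/3e9": {
        "type": "packing-note", "quantity": 3, "links": [],
    },
}

next_state = step("http://stock.example/item/77c2", web, stock_rule)
```

Note that the rule never reaches over and modifies the packing note: it only observes it, and rewrites its own resource's state - hyperdata driving hyperdata.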


Not Like AtomPub, then

So, quite a different conclusion from what is now the 'conventional REST' of the four verbs - GET, POST, PUT and DELETE.

Quite different from asymmetric, one-way application protocols as modelled by AtomPub, in which clients aren't considered worthy to hold their own resources, but are allowed only an inscrutable 'application state'.

By focusing on GET and the freedom in integration to be symmetric, we've arrived at a general distributed programming model, FOREST, that allows us to express business rules that drive an application's hyperdata in the context of another application's hyperdata.

Watch this blog (and Twitter), where I'll be talking more about the benefits of FOREST, its implementation, and, above all, offering examples of how it would work (once the code is ready enough!).