Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to it to make it truly an I/O library. In the meantime, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are reduced to a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below
Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as when working with localStorage, it returns a "synchronous promise", which is essentially a function that can optionally be invoked immediately; doing so fires .success(value) and returns the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The examples below behave the same, but only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();
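The "synchronous promise" idea can be sketched roughly like this. This is a hypothetical illustration only, not XIO's actual implementation; syncPromise and its internals are invented for this example:

```javascript
// A minimal sketch of a "synchronous promise": the operation has already
// completed, so .success() can fire its callback immediately, and the
// promise itself is a function that returns the value when invoked.
// (Hypothetical illustration only, not XIO's real implementation.)
function syncPromise(value) {
    var promise = function () {
        return value; // immediate invocation returns the value directly
    };
    promise.success = function (cb) {
        cb(value); // result is already known, so call back right away
        return promise; // chainable
    };
    promise.complete = function (cb) {
        cb();
        return promise;
    };
    return promise;
}

// Asynchronous convention still works:
var val1;
syncPromise("myvalue").success(function (v) { val1 = v; });

// Synchronous convention: invoke immediately with parens.
var val2 = syncPromise("myvalue")();
```

Either convention reaches the same value; the function-wrapper simply makes the immediate-invocation shorthand possible.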

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implemented functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique: xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently, asserting that it will have some CRUD operations available, the perspective is flipped: you focus first on what you need to do (the verbs), while the target becomes more like a parameter, or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves, so that rather than repositories having an unknown set of operations with unknown interface styles and other features, your standard CRUD operations, which are predictable, have a set of valid repository targets that support them.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.
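Lacking interfaces, the closest JavaScript can get is a runtime duck-typing check. A hypothetical guard like this (not part of XIO) illustrates why a shared convention has to stand in for a compile-time contract:

```javascript
// Hypothetical runtime stand-in for a CRUD "interface" check,
// since JavaScript has no language-level interfaces.
function supportsCrud(repository) {
    return ["get", "post", "put", "delete"].every(function (verb) {
        return typeof repository[verb] === "function";
    });
}

var goodRepo = {
    get: function (key) {},
    post: function (key, value) {},
    put: function (key, value) {},
    delete: function (key) {}
};
var badRepo = { get: function (key) {} };

console.log(supportsCrud(goodRepo)); // true
console.log(supportsCrud(badRepo));  // false
```

A check like this only fails at runtime, of course, which is exactly the weakness the verb-first convention tries to route around.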

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented for JSON/Javascript object literal field sets (send only the deltas)
Examples
// initialize

var xio = Xio(); // initialize a module instance named "xio"
localStorage
xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) {
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});
sessionStorage
xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");
cookie
xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, retaining only the xio.set["cookie"](key, value) call, you can use the automatically returned helper functions:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");

Note that with this approach, while it is more expressive and potentially more convertible to other CRUD targets, each helper function deletes the previous cookie and re-sets it with the new adjustment.
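That delete-and-reset behavior might be sketched like this. This is a hypothetical illustration; the write function is injected in place of the real document.cookie assignment so the behavior is visible outside a browser:

```javascript
// Hypothetical sketch of chainable cookie-setter helpers. Each helper
// re-writes the cookie with the accumulated options, mirroring the
// delete-and-reset behavior described above. `write` stands in for the
// real document.cookie assignment.
function cookieSetter(key, value, write) {
    var opts = {};
    function commit() {
        write(key, value, opts); // re-set cookie with current options
    }
    var helpers = {
        expires: function (e) { opts.expires = e; commit(); return helpers; },
        path:    function (p) { opts.path = p;    commit(); return helpers; },
        domain:  function (d) { opts.domain = d;  commit(); return helpers; }
    };
    commit(); // initial set with no options
    return helpers;
}

// Usage with a fake write function that just records each write:
var writes = [];
cookieSetter("skey", "svalue", function (k, v, o) {
    writes.push({ key: k, value: v, opts: JSON.parse(JSON.stringify(o)) });
})
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");
// The cookie was written four times: once initially, then once per helper.
```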

session cookie
xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
persistent cookie
xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
web server resource (basics)
var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().
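That formatting convention can be sketched as a small helper (hypothetical; XIO's internal implementation may differ):

```javascript
// Sketch of C#-style string.Format() placeholder substitution applied
// to a url template. Accepts a single key or an array of keys.
function formatUrl(template, key) {
    var keys = Array.isArray(key) ? key : [key];
    return template.replace(/\{(\d+)\}/g, function (match, index) {
        return String(keys[parseInt(index, 10)]);
    });
}

formatUrl("my/url/{0}/{1}", [4, 12]);       // "my/url/4/12"
formatUrl("/myRepository?key={0}", "mykey"); // "/myRepository?key=mykey"
```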

When the methods property is defined as an array of "GET", "POST", etc., each mapping to a standard XIO verb, an XHR route is internally created using the rest of the options defined in the options object passed to define(). The return value of define() is an object that lists all of the operations that were wrapped for XIO (i.e. get(), post(), etc).
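The generation step might look something like this in spirit. This is a hypothetical sketch; makeXhrOperation and defineSketch are invented stand-ins, not XIO's real internals:

```javascript
// Hypothetical sketch of how define() might map a methods array onto
// generated operation functions. makeXhrOperation is a stand-in for
// the real XHR wrapper.
function makeXhrOperation(method, options) {
    return function (key, value) {
        // The real implementation would issue an XHR here and return a
        // promise; this sketch just reports the mapping for illustration.
        return { method: method, url: options.url, key: key, value: value };
    };
}

function defineSketch(options) {
    var ops = {};
    (options.methods || []).forEach(function (method) {
        ops[method.toLowerCase()] = makeXhrOperation(method, options);
    });
    return ops;
}

var repo = defineSketch({ url: "/myRepository?key={0}", methods: ["GET", "POST"] });
// repo now exposes repo.get and repo.post, but no repo.put
```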

The rest of the options are used, for now, as jQuery's $.ajax(url, options) options parameter. The async property defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally immediately invoke with parens (()); this will return the value that is normally passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.
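The promise-wrapping behavior described here might be sketched like this (hypothetical; wrapInPromise is an invented name for illustration):

```javascript
// Sketch of wrapping a plain return value so callers always see a
// promise-like object with .success and .complete. If the custom
// implementation already returned such an object, it passes through.
function wrapInPromise(result) {
    if (result && typeof result.success === "function"
               && typeof result.complete === "function") {
        return result; // already promise-like; do not re-wrap
    }
    var wrapped = {
        // the value is already known, so fire callbacks immediately
        success: function (cb) { cb(result); return wrapped; },
        complete: function (cb) { cb(); return wrapped; }
    };
    return wrapped;
}

var out;
wrapInPromise("value").success(function (v) { out = v; });
// out === "value"
```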

web server resource (asynchronous GET)
xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});
web server resource (synchronous GET)
xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction
web server resource POST
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});
web server resource (DELETE)
xio.delete.myresourceContainer("myresource");
web server resource (PUT)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
web server resource (PATCH)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
custom implementation and redefinition
xio.define("custom1", {
    get: function(key) { return "teh value for " + key; }
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fallback to long polling, and that has no dependency upon jQuery. Instead, what has so far become implemented has been a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on nodejs.

Additionally, in a completely isolated parallel path, if no progress is made by the ASP.NET SignalR team to make the SignalR client freed from jQuery, xio.js might become tailored to be a somewhat code compatible client implementation or a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at top, you definitely may use it for any project. I will drop in a license (probably Apache 2 or BSD or Creative Commons Attribution or somesuch) in the near future.

A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date. Without devising a new approach to route manipulation or any of the other fundamental infrastructural designs for the endpoints (because there were too many), our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>
<div id="results"></div>
<div><button id="reload">Reload</button></div>
@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function() {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought was in the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case, it is actually server-side, but this is probably because of the amount of data being transferred. I’ll explain after I explain what I did in my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy..” semantics, such as varying the route parameters. That is great except that the route and parameters were currently not changing in our implementation.

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.


public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    Response.RemoveOutputCacheItem(Url.Action("GetData"));
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

<div><button id="invalidate">Invalidate</button></div>

$('#invalidate').on('click', function() {
    $.post($APPROOT + "Home/DoSomething", null, function(o) {
        window.location.reload();
    }, 'json');
});

 

(Screenshot: While Reload has no effect on the Last modified value, the Invalidate button causes the date to increment.)

When testing, this actually worked quite well. But concerns were raised about the memory footprint on the server. Personally I think the memory cost of practically any server-side caching is negligible, certainly if the cached item is small enough that it would be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is the transmission; the point of caching is to make the user experience as smooth and seamless as possible with minimal waiting, so if the user is waiting for a (cached) payload, while it may be much faster than the time taken to recalculate or re-acquire the data, it is still measurably slower than relying on browser cache.

The default implementation of OutputCacheAttribute is actually OutputCacheLocation.Any. This indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests: for tiny payloads, the behavior seemed to be caching on the server and no caching on the client; for a large payload from GET requests with querystring parameters, caching seemed to happen on the client, but with an HTTP query carrying an "If-Modified-Since" header, resulting in a 304 Not Modified from the server (indicating it was also cached on the server, and the server verified that the client's cache remained valid); and for a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation checking (no HTTP request for an If-Modified-Since check). Now, to be quite honest, I am only guessing that these were the distinguishing factors behind these observations. Honestly, I saw variations of these behaviors happening all over the place as I tinkered with scenarios; this was simply the initial pattern I felt I was observing.

At any rate, for our purposes we were currently stuck with relying on “Any” as the location, which in theory would remove server-side caching if the server ran short on RAM (in theory, I don’t know, although the truth can probably be researched, which I don’t have time to get into). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you really can’t. The browser controls the cache bucket and no browsers provide hooks into the cache to invalidate them. But we can get smart about this, and work around the problem, by bypassing the cached data. Cached HTTP results are stored on the basis of varying by the full raw URL on HTTP GET methods, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and are only cached if allowed to be cached in the first place as per the HTTP header directives in the HTTP response. So, to bypass the cache you don’t cache, or you need to know up front how long the cache should remain until it expires—neither of these being acceptable in a dynamic application—or you need to use POST instead of GET, or you need to vary up the URL.

Microsoft originally got around the caching problem in ASP.NET 1.x by forcing the "normal" development cycle into the lifecycle of <form> tags that always used the POST method over HTTP. Responses from POST requests are never cached. But POSTing is not clean, as it violates the semantics of the verb when nothing is being sent up and data is only being retrieved.

You can also use ETag in the HTTP headers, which isn’t particularly helpful in a dynamic application as it is no different from a URL + expiration policy.

To summarize, to control cache:

  • Disable caching from the server in the Response header (Pragma: no-cache)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST not GET
  • Etag
  • Vary the URL (case-sensitive)

Given our options, we need to vary up the URL. There are a number of approaches to this, but almost all of them involve appending or modifying the querystring with parameters that are expected to be ignored by the server.

$.getJSON($APPROOT + "Home/GetData?_=" + Date.now(), function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you’re reading this blog article, you’re probably rolling your eyes. “Duh.”) jQuery inherently supports cache-busting, but it does not do it on its own from $.getJSON(), it only does it in $.ajax() when the options parameter includes {cache: false}, unless you invoke $.ajaxSetup({ cache: false }); first to disable all caching. Otherwise, for $.getJSON() you would have to do it manually by appending the URL. (Alright, you can stop rolling your eyes at me now, I’m just trying to be thorough here..)
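A hand-rolled equivalent of what { cache: false } does internally is just a throwaway timestamp appended to the querystring; something like this (a minimal sketch, with bustCache as an invented helper name):

```javascript
// Minimal cache-busting helper: appends a throwaway timestamp
// parameter, the same idea jQuery's { cache: false } uses internally.
function bustCache(url) {
    // use "&" if the URL already has a querystring, "?" otherwise
    var sep = url.indexOf("?") > -1 ? "&" : "?";
    return url + sep + "_=" + Date.now();
}

bustCache("/Home/GetData");     // e.g. "/Home/GetData?_=1376170287015"
bustCache("/Home/GetData?x=1"); // e.g. "/Home/GetData?x=1&_=1376170287015"
```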

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you’re using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. The approach in doing this will depend upon your Javascript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with an invalidation routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example, since you're hacking at a third-party library; a better approach might be to wrap the invocation of $.getJSON() with an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that the invalidation of cached data that derived from the server needs to be triggered by the server because it is the server, not the client, that knows that client cached data is no longer up-to-date. Depending on the application, the client logic might just know by keeping track of what server endpoints it is touching, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale given specific conditions that only the server may know and perhaps only upon some calculation. In other words, invalidation needs to be pushed by the server.

One brute force, burdensome, and perhaps a little crazy approach to this might be to use actual “push technology”, formerly “Comet” or “long-polling”, now WebSockets, implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server and the server then has this open socket that can push invalidation flags to the client.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.

While we may not know which server endpoint might introduce the invalidation of client cache data, we could assume that the invalidation will be triggered by any server endpoint(s), and build invalidation trigger logic into the handling of all server HTTP responses.

To begin implementing some sort of invalidation trigger on the server I could flag invalidations to the client using HTTP header(s).

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    Response.RemoveOutputCacheItem(url); // invalidate on server
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says that “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one”. The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

Before anything else, I need to refactor the existing AJAX functionality on the client so that, instead of using $.getJSON, I use $.ajax or some other flexible XHR handler, wrapped in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function(url, data, callback) {
    return httpAction(url, data, callback, "GET");
};
var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};
var httpAction = function(url, data, callback, method) {
    url = cachebust(url);
    if (typeof(data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method,
        success: function(responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};
var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};
function handleInvalidationFlags(xhr) {
    // not yet implemented
}
function cachebust(url) {
    // not yet implemented
    return url;
}

// application logic
httpGET($APPROOT + "Home/GetData", function(o) {
    $('#results').text("Last modified: " + o.LastModified);
});
$('#reload').on('click', function() {
    window.location.reload();
});
$('#invalidate').on('click', function() {
    httpPOST($APPROOT + "Home/Invalidate", function (o) {
        window.location.reload();
    });
});

At this point we’re not doing anything yet, we’ve just broken up the HTTP/XHR functionality into wrapper functions that we can now modify to manipulate the request and to deal with the invalidation flag in the response. Now all our work will be in handleInvalidationFlags() for capturing that new header we just emitted from the server, and cachebust() for hijacking the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there, and add the cached item to a cached data set that can be stored locally in the browser with web storage. The best place to put this cached data set is in sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration flag) works but is less ideal because it adds to the payload of all HTTP requests. Putting it in localStorage is less ideal because we do want the invalidation flag(s) to go away when the browser session ends, because that’s when the original browser cache will expire anyway. There is one caveat to sessionStorage: if a user opens a new tab or window, the browser will drop the sessionStorage in that new tab or window, but may reuse the browser cache. The only workaround I know of at the moment is to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be. We will need to ignore case when matching URLs with cache invalidation flags.

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {
    // capture HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // update invalidation flags data set
    for (var i in invalidatedItemsHeader) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }
    // store revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET, which ignores case on the path, we need a function to force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // if the URL matches, return the URL with a cachebuster appended
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }
    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is permanently stored as the appended cache-buster value. This allows the data to remain cached, just updated to that point in time. If invalidation occurs again, that appended value is revised to the new date/time.

Running this now, after invalidation is triggered by the server, the subsequent request of data is appended with a cache-buster querystring field.


In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might follow these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited value to list multiple items. (Do not URI-encode the semicolon when using it as a list delimiter.) The semicolon is a reserved/invalid character in a URI and a valid delimiter in HTTP headers, so this is safe.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:
        {
            "http://url.com/route/action": (date_value_of_invalidation_flag),
            "http://url.com/route/action/2": (date_value_of_invalidation_flag)
        }
  4. Hijack all XHR requests so that the URL is appropriately appended with cachebusting querystring parameter if the URL was found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default convention ignore case (“/Route/Action” == “/route/action”) on the path, but the HTTP specification does not and therefore the browser cache bucket will not ignore case. Force all URL checks for invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring, otherwise for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters are in consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client.
  7. These steps are for “pull”-based XHR-driven invalidation flags being pulled from the server via XHR. For “push”-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication using WebSockets or long polling. Server application logic can then invoke this channel or hub to send an invalidation flag to the client or to all clients.
  8. On the client side, an invalidation flag “push” triggered in #7 above, for which #1 and #2 above would no longer apply, can still utilize #3 through #6.
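For the push-based flow in steps 7 and 8, the client-side handler only needs to merge the pushed URLs into the same flags data set that the pull-based flow maintains. A sketch (the function name and the injectable store parameter are my assumptions; the code above uses sessionStorage directly):

```javascript
// Hypothetical client-side handler for a pushed invalidation flag (e.g. invoked
// from a SignalR hub callback). `store` stands in for sessionStorage so the
// sketch is testable anywhere.
function applyPushedInvalidation(store, urls, now) {
    now = now || Date.now();
    var flags = JSON.parse(store.getItem("invalidated-cache-items") || "{}");
    urls.split(';').forEach(function (u) {
        // honor the case-insensitivity rule from step 5: lower-case the path,
        // but leave the querystring alone
        var path = u.split('?')[0].toLowerCase();
        var query = u.indexOf('?') > -1 ? '?' + u.split('?')[1] : '';
        flags[path + query] = now;
    });
    store.setItem("invalidated-cache-items", JSON.stringify(flags));
    return flags;
}
```

From there, the existing cachebust() picks the flags up on the next request exactly as in the pull-based flow.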

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip

Tags:

ASP.NET | C# | Javascript | Techniques | Web Development

Canvas & HTML 5 Sample Junk

by Jon Davis 27. September 2012 15:48

Poking around with HTML 5 canvas again, refreshing my knowledge of the basics. Here's where I'm dumping links to my own tinkerings for my own reference. I'll update this with more list items later as I come up with them.

  1. Don't have a seizure. http://jsfiddle.net/8RYtu/22/
    HTML5 canvas arc, line, audio, custom web font rendered in canvas, non-fixed (dynamic) render loop with fps meter, window-scale, being obnoxious
  2. Pass-through pointer events http://jsfiddle.net/MtGT8/1/
    Demonstrates how the canvas element, which would normally intercept mouse events, does not do so here, and instead allows the mouse event to propagate to the elements behind it. Huge potential but does not work in Internet Explorer.
  3. Geolocation sample. http://jsfiddle.net/nmu3x/4/ 
    Nothing to do with canvas here. Get over it.
  4. ECMAScript 5 Javascript property getter/setter. http://jsfiddle.net/9QpnW/8/
    Like C#, Javascript now supports assigning functions to property getters/setters. See how I store a value privately (in a closure) and do bad by returning a modified value.

Automatically Declaring Namespaces in Javascript (namespaces.js)

by Jon Davis 25. September 2012 18:24

Namespaces in Javascript are a pattern many untrained or undisciplined developers fail to apply, but they are an essential strategy for retaining maintainability and avoiding collisions in Javascript source.

Part of the problem with namespaces is that if you have a complex client-side solution with several Javascript objects scattered across several files but they all pertain to the same overall solution, you may end up with very long, nested namespaces like this:

var ad = AcmeCorporation.Foo.Bar.WidgetFactory.createWidget('advertisement');

I personally am not opposed to long namespaces, so long as they can be shortened with aliases when their length gets in the way.

var wf = AcmeCorporation.Foo.Bar.WidgetFactory;
var ad = wf.createWidget('advertisement');

The problem I have run into, however, is that when I have multiple .js files in my project and I am not 100% sure of their load order, I may run into errors. For example:

// acme.WidgetFactory.js
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function (e) {
        return new otherProvider.Widget(e);
    }
};

This may throw an error immediately because even though I’m declaring the WidgetFactory namespace, I am not certain that these namespaces have been defined:

  • AcmeCorporation
  • AcmeCorporation.Foo
  • AcmeCorporation.Foo.Bar

So again if any of those are missing, the code in my acme.WidgetFactory.js file will fail.

So then I clutter it with code that looks like this:

// acme.WidgetFactory.js
if (!window['AcmeCorporation']) window['AcmeCorporation'] = {};
if (!AcmeCorporation.Foo) AcmeCorporation.Foo = {};
if (!AcmeCorporation.Foo.Bar) AcmeCorporation.Foo.Bar = {};
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function (e) {
        return new otherProvider.Widget(e);
    }
};

This is frankly not very clean. It adds a lot of overhead to my productivity just to get started writing code.

So today, to complement my using.js solution (which dynamically loads scripts), I have cobbled together a very simple script that dynamically defines a namespace in a single line of code:

// acme.WidgetFactory.js
namespace('AcmeCorporation.Foo.Bar');
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function (e) {
        return new otherProvider.Widget(e);
    }
};
/* or, alternatively ..
namespace('AcmeCorporation.Foo.Bar.WidgetFactory');
AcmeCorporation.Foo.Bar.WidgetFactory.createWidget = function (e) {
    return new otherProvider.Widget(e);
};
*/

As you can see, a function called “namespace” splits the dot-notation and creates the nested objects on the global namespace to allow for the nested namespace to resolve correctly.

Note that this will not overwrite or clobber an existing namespace, it will only ensure that the namespace exists.

a = {};
a.b = {};
a.b.c = 'dog';
namespace('a.b.c');
alert(a.b.c); // alerts with "dog"

Where you will still need to be careful is when you are not sure of load order: in that case, the names all the way up the dot-notation tree should be namespaces alone and never defined objects, or else assigning the defined objects manually may clobber nested namespaces and nested objects.

namespace('a.b.c');
a.b.c.d = 'dog';
a.b.c.e = 'bird';
// in another script ..
a.b = {
    c: {
        d: 'cat'
    }
};
// in consuming script / page
alert(a.b.c); // alerts [object]
alert(a.b.c.d); // alerts 'cat'
alert(a.b.c.e); // alerts 'undefined'

Here’s the download if you want it as a script file [EDIT: the linked resource has since been modified and has grown significantly], and here is its [original] content:

function namespace(ns) {
    var g = function () { return this; }();
    ns = ns.split('.');
    for (var i = 0, n = ns.length; i < n; ++i) {
        var x = ns[i];
        if (x in g === false) g[x] = {};
        g = g[x];
    }
}

The above is actually written by commenter "steve" (sjakubowsi -AT- hotmail -dot-com). Here is the original solution that I had come up with:

namespace = function (n) {
    var s = n.split('.');
    var exp = 'var ___v=undefined;try {___v=x} catch(e) {} if (___v===undefined)x={}';
    var e = exp.replace(/x/g, s[0]);
    eval(e);
    for (var i = 1; i < s.length; i++) {
        var ns = '';
        for (var p = 0; p <= i; p++) {
            if (ns.length > 0) ns += '.';
            ns += s[p];
        }
        e = exp.replace(/x/g, ns);
        eval(e);
    }
}
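For comparison, the same idea can be written without eval; this is my own sketch of an equivalent, not part of the original script:

```javascript
// Eval-free equivalent sketch: walk (or create) one object per dot-segment.
// `root` is optional and defaults to the global object, matching the original.
function namespaceInto(ns, root) {
    root = root || function () { return this; }();
    return ns.split('.').reduce(function (parent, part) {
        if (!(part in parent)) parent[part] = {};
        return parent[part]; // descend one level
    }, root);
}
```

Like the versions above, it only ensures the namespace exists and never clobbers objects already assigned along the path.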


Tags:

Javascript | Pet Projects

Why jQuery Plugins Use String-Referenced Function Invocations

by Jon Davis 25. September 2012 10:52

Some time ago (years ago) I cobbled together a jQuery plug-in or two that at the time I was pretty proud of, but in retrospect I’m pretty embarrassed. One of these plugins was jqDialogForms. The embarrassment was not due to its styling—the point of it was that it could be skinnable, I just didn’t have time to create sample skins—nor was the embarrassment due to the functional conflict with jQuery UI’s dialog component, because I had a specific vision in mind which included modeless parent/child ownership, opening by simple DOM reference or by string, and automatic form serialization to JSON. Were I to do all this again I would probably just extend jQuery UI with syntactical sugar, and move form serialization to another plugin, but all that is a tangent from the purpose of this blog post. My embarrassment with jqDialogForms is with the patterns and conventions I chose in contradiction to jQuery’s unique patterns.

Since then I have abandoned (or perhaps neglected) jQuery plugins development, but I still formed casual and sometimes uneducated opinions along the way. One of the patterns that had irked me was jQuery UI’s pattern of how its components’ functions are invoked:

$('#mydiv').accordion( 'disable' );
$('#myauto').autocomplete( 'search' , [value] );
$('#prog').progressbar( 'value' , [value] );

Notice that the actual functions being invoked are identified with a string parameter into another function. I didn’t like this, and I still think it’s ugly. This came across to me as “the jQuery UI” way, and I believed that this contradicted “the jQuery way”, so for years I have been baffled as to how jQuery could have adopted jQuery UI as part of its official suite.

Then recently I came across this, and was baffled even more:

http://docs.jquery.com/Plugins/Authoring

Under no circumstance should a single plugin ever claim more than one namespace in the jQuery.fn object.

(function( $ ){

  $.fn.tooltip = function( options ) { 
    // THIS
  };
  $.fn.tooltipShow = function( ) {
    // IS
  };
  $.fn.tooltipHide = function( ) { 
    // BAD
  };
  $.fn.tooltipUpdate = function( content ) { 
    // !!!  
  };

})( jQuery );

This is discouraged because it clutters up the $.fn namespace. To remedy this, you should collect all of your plugin’s methods in an object literal and call them by passing the string name of the method to the plugin.

(function( $ ){

  var methods = {
    init : function( options ) { 
      // THIS 
    },
    show : function( ) {
      // IS
    },
    hide : function( ) { 
      // GOOD
    },
    update : function( content ) { 
      // !!! 
    }
  };

  $.fn.tooltip = function( method ) {
    
    // Method calling logic
    if ( methods[method] ) {
      return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 ));
    } else if ( typeof method === 'object' || ! method ) {
      return methods.init.apply( this, arguments );
    } else {
      $.error( 'Method ' +  method + ' does not exist on jQuery.tooltip' );
    }    
  
  };

})( jQuery );

// calls the init method
$('div').tooltip(); 

// calls the init method
$('div').tooltip({
  foo : 'bar'
});

// calls the hide method
$('div').tooltip('hide'); 
// calls the update method
$('div').tooltip('update', 'This is the new tooltip content!'); 

This type of plugin architecture allows you to encapsulate all of your methods in the plugin's parent closure, and call them by first passing the string name of the method, and then passing any additional parameters you might need for that method. This type of method encapsulation and architecture is a standard in the jQuery plugin community and is used by countless plugins, including the plugins and widgets in jQueryUI.

What baffled me was not their initial reasoning pertaining to namespaces. I completely understand the need to keep plugins’ namespaces in their own bucket. What baffled me was how this was considered a solution. Why not simply use this?

$('#mythingamajig').mySpecialNamespace.mySpecialFeature.doSomething( [options] );

To see whether I could make both myself and the “official” jQuery team happy, I cobbled this test together ..

(function($) {
    
    $.fn.myNamespace = function() {
        var fn = 'default';
        
        var args = $.makeArray(arguments);
        if (args.length > 0 && typeof(args[0]) == 'string' && !(!($.fn.myNamespace[args[0]]))) {
            fn = args[0];
            args = $(args).slice(1);
        }
        $.fn.myNamespace[fn].apply(this, args);
    };
    $.fn.myNamespace.default = function() {
        var s = '\n';
        var i=0;
        $(arguments).each(function() {            
            s += 'arg' + (++i).toString() + '=' + this + '\n';
        });
        alert('Default' + s);
        
    };
    $.fn.myNamespace.alternate = function() {
        var s = '\n';
        var i=0;
        $(arguments).each(function() {            
            s += 'arg' + (++i).toString() + '=' + this + '\n';
        });
        alert('Alternate' + s);
        
    };

    $().myNamespace('asdf', 'xyz');
    $().myNamespace.default('asdf', 'xyz');
    $().myNamespace('default', 'asdf', 'xyz');
    $().myNamespace.alternate('asdf', 'xyz');
    $().myNamespace('alternate', 'asdf', 'xyz');
    
})(jQuery);

Notice the last few lines in there ..

    $().myNamespace('asdf', 'xyz');
    $().myNamespace.default('asdf', 'xyz');
    $().myNamespace('default', 'asdf', 'xyz');
    $().myNamespace.alternate('asdf', 'xyz');
    $().myNamespace('alternate', 'asdf', 'xyz');

When this worked as I hoped, I originally set about making this blog post a “plugin generator plugin” that would make plug-in creation really simple and also enable the above calling convention. But when I got some passing tests and started adding a few more, I realized I had failed to notice a critical detail: the this context, and chainability.

In JavaScript, navigating a namespace as with $.fn.myNamespace.something.somethingelse doesn’t execute any code within the dot-notation. Without the execution of functional code, there can be no context for the this context, which should be the jQuery-wrapped selection, and as such there can be no context for the returned chainable object. (I realize that it is possible to execute code with modern JavaScript getters and setters, but not all modern browsers support them, and certainly not all commonly used browsers.) This was something that I as a C# developer found easy to forget and overlook, because in C# we take the passing around of context in property getters for granted.
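A plain-object sketch (with hypothetical names) makes the problem concrete:

```javascript
// Property access runs no code, so nothing along the dot-path can capture
// the wrapper as `this`. `wrapper` stands in for a jQuery-wrapped selection.
var wrapper = { selection: ['#mydiv'] };
wrapper.myNamespace = {
    doSomething: function () { return this; }
};

// By the time doSomething() executes, `this` is the namespace object,
// not the wrapper, so there is nothing sensible to return for chaining:
var context = wrapper.myNamespace.doSomething();
// context === wrapper.myNamespace; context !== wrapper
```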

Surprisingly, this technical reasoning for the string-based function identifier for jQuery plug-in function invocations was not mentioned on the jQuery Plugins documentation site, nor was it mentioned in the Pluralsight video-based training I recently perused. It seemed like what Pluralsight’s trainer was saying was, “You can use $().mynamespace.function1(), but that’s obscure! Use a string parameter instead!” And I’m like, “No, it is not obscure! Calling a function by string is obscure, because you can’t easily identify it as a function reference distinct from a parameter value!”

The only way to retain the this context while removing the string-based function reference is to invoke it along the way.

$().myNamespace().myFunction('myOption1', true, false);

Notice the parentheses after .myNamespace. And that is a wholly different convention, one that few in jQuery-land are used to. But I do think that it is far more readable than ..

$().myNamespace('myFunction', 'myOption1', true, false);

I still like the former, as it is more readable, and I remain unsure as to why the latter is the accepted convention, but my guess is that a confused user might try to chain back to jQuery right after .myNamespace() rather than after executing a nested function. And that, I suppose, demonstrates how the former pattern is contrary to jQuery’s chainability design of every().invocation().just().returns().jQuery.
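To illustrate how the “invoke along the way” convention preserves both the this context and chainability, here is a sketch with hypothetical names (a plain constructor stands in for jQuery):

```javascript
// Because myNamespace() actually executes, it can close over `this` and hand
// back an API whose methods chain back to the wrapper, jQuery-style.
function Wrapper(selector) { this.selector = selector; }
Wrapper.prototype.myNamespace = function () {
    var self = this; // captured at invocation time
    return {
        myFunction: function (option) {
            // ... act on self.selector using `option` ...
            return self; // chain back to the wrapper
        }
    };
};
```

So new Wrapper('#x').myNamespace().myFunction('myOption1') returns the wrapper itself, keeping the chain intact.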


Tags:

Javascript


 

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
