A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date. Without devising a new approach to route manipulation or any of the other fundamental infrastructural designs for the endpoints (there were too many), our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>

<div id="results"></div>
<div><button id="reload">Reload</button></div>

@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function() {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought rested on the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case; it is actually server-side, but that is probably because of the amount of data being transferred. I’ll explain that after I describe what I did under my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy..” semantics, such as varying the route parameters. That is great, except that the route and parameters simply were not changing in our implementation.
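
For example (a hypothetical action, not from our codebase; a minimal sketch of the “VaryBy” convention), output cache entries can be varied by a querystring or route parameter:

[OutputCache(Duration = 300, VaryByParam = "id")]
public JsonResult GetDataById(int id)
{
    // Each distinct "id" value gets its own output cache entry for 5 minutes.
    return Json(new
    {
        Id = id,
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}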

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.


public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    HttpResponse.RemoveOutputCacheItem(Url.Action("GetData")); // static method on HttpResponse
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

<div><button id="reload">Reload</button> <button id="invalidate">Invalidate</button></div>

$('#invalidate').on('click', function() {
    $.post($APPROOT + "Home/DoSomething", null, function(o) {
        window.location.reload();
    }, 'json');
});

[Screenshot: While Reload has no effect on the Last modified value, the Invalidate button causes the date to increment.]

When testing, this actually worked quite well. But concerns were raised about the memory burden on the server. Personally I think the memory cost of practically any server-side caching is negligible, certainly if the data is small enough to be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is the transmission itself: the point of caching is to make the user experience as smooth and seamless as possible with minimal waiting, and while waiting for a server-cached payload may be much faster than recalculating or re-acquiring the data, it is still measurably slower than relying on the browser cache.

The default implementation of OutputCacheAttribute is actually OutputCacheLocation.Any, which indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests, for tiny payloads the behavior seemed to be caching on the server with no caching on the client. For a large payload from GET requests with querystring parameters, the behavior seemed to be caching on the client, but with an HTTP request carrying an “If-Modified-Since” header, resulting in a 304 Not Modified from the server (indicating the item was also cached on the server, and the server verified that the client’s cache remained valid). And for a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation request from the client (no HTTP request with an If-Modified-Since check). Now, to be quite honest, I am only guessing that these were the distinguishing factors behind the behaviors I observed; I saw variations of these behaviors happening all over the place as I tinkered with scenarios, and this was just the initial pattern I felt I was observing.

At any rate, for our purposes we were stuck with relying on “Any” as the location, which in theory would drop the server-side cache entry if the server ran short on RAM (in theory; I don’t know for sure, and I didn’t have time to research it). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you really can’t. The browser controls the cache bucket, and no browser provides hooks into the cache to invalidate entries. But we can get smart about this and work around the problem by bypassing the cached data. Cached HTTP GET results are keyed on the full raw URL, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and they are only cached at all if the HTTP response header directives allow it. So, to bypass the cache, you either don’t cache, or you know up front how long the cache should live before it expires (neither of these being acceptable in a dynamic application), or you use POST instead of GET, or you vary up the URL.

Microsoft originally got around the caching problem in ASP.NET 1.x by forcing the “normal” development cycle into the lifecycle of <form> tags that always used the POST method over HTTP; responses to POST requests are never cached. But POSTing is not clean, as it does not follow the semantics of the verb when nothing is being sent up and data is only being retrieved.

You can also use an ETag in the HTTP headers, which isn’t particularly helpful in a dynamic application, as it is effectively no different from a URL plus an expiration policy.
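
For reference, emitting and checking an ETag by hand in ASP.NET MVC might look like the following minimal sketch (GetDataWithETag and dataVersion are hypothetical names, not part of the sample project):

public ActionResult GetDataWithETag()
{
    // dataVersion is a hypothetical stand-in for whatever version stamp
    // the application tracks for this resource.
    string etag = "\"" + dataVersion + "\"";
    if (Request.Headers["If-None-Match"] == etag)
    {
        return new HttpStatusCodeResult(304); // Not Modified
    }
    Response.AddHeader("ETag", etag);
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}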

To summarize, to control cache:

  • Disable caching from the server in the response headers (Cache-Control: no-cache / Pragma: no-cache; see the sketch after this list)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST not GET
  • Etag
  • Vary the URL (case-sensitive)
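
For completeness, here is a minimal sketch of the first option in ASP.NET MVC (the action name is hypothetical); NoStore plus a zero duration emits no-cache directives so the browser never stores the response:

[OutputCache(NoStore = true, Duration = 0, VaryByParam = "None")]
public JsonResult GetVolatileData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}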

Given our options, we need to vary up the URL. There are a number of approaches to this, but almost all of them involve appending or modifying the querystring with parameters that are expected to be ignored by the server.

$.getJSON($APPROOT + "Home/GetData?_=" + Date.now(), function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you’re reading this blog article, you’re probably rolling your eyes. “Duh.”) jQuery inherently supports cache-busting, but it does not do it on its own from $.getJSON(); it only does it in $.ajax() when the options parameter includes {cache: false}, unless you invoke $.ajaxSetup({ cache: false }); first to disable all caching. Otherwise, for $.getJSON() you would have to do it manually by appending to the URL. (Alright, you can stop rolling your eyes at me now, I’m just trying to be thorough here.)

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you’re using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. The approach will depend upon the JavaScript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with a version that routes through an invalidation routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example, since you’re hacking at a third party library. A better approach might be to wrap the invocation of $.getJSON() with an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that invalidation of cached data derived from the server needs to be triggered by the server, because it is the server, not the client, that knows when client-cached data is no longer up-to-date. Depending on the application, the client logic might just know by keeping track of which server endpoints it is touching, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale only under specific conditions that only the server may know, and perhaps only upon some calculation. In other words, invalidation needs to be pushed by the server.

One brute force, burdensome, and perhaps a little crazy approach to this might be to use actual “push technology”, formerly “Comet” or “long-polling”, now WebSockets, implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server and the server then has this open socket that can push invalidation flags to the client.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.

While we may not know which server endpoint might introduce the invalidation of client-cached data, we can assume that invalidation will be triggered by some server endpoint, and build invalidation trigger logic into the handling of every HTTP response.

To begin implementing some sort of invalidation trigger on the server I could flag invalidations to the client using HTTP header(s).

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    HttpResponse.RemoveOutputCacheItem(url); // invalidate on server (static method on HttpResponse)
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says, “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one”. The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response header that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

Before I do anything on that front, I first need to refactor the existing AJAX functionality on the client so that instead of using $.getJSON, I might use $.ajax or some other flexible XHR handler, and wrap it all in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function(url, data, callback) {
    return httpAction(url, data, callback, "GET");
};
var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};
var httpAction = function(url, data, callback, method) {
    url = cachebust(url);
    if (typeof(data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method, // was hard-coded to "GET"; use the verb passed in
        success: function(responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};
var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};
function handleInvalidationFlags(xhr) {
    // not yet implemented
}
function cachebust(url) {
    // not yet implemented
    return url;
}

// application logic
httpGET($APPROOT + "Home/GetData", function(o) {
    $('#results').text("Last modified: " + o.LastModified);
});
$('#reload').on('click', function() {
    window.location.reload();
});
$('#invalidate').on('click', function() {
    // posts to the DoSomething() action shown earlier
    httpPOST($APPROOT + "Home/DoSomething", function (o) {
        window.location.reload();
    });
});

At this point we’re not doing anything new yet; we’ve just broken up the HTTP/XHR functionality into wrapper functions that we can now modify to manipulate the request and to deal with the invalidation flag in the response. All our work will now be in handleInvalidationFlags(), which captures the new header we just emitted from the server, and cachebust(), which hijacks the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there, and add the cached item to a cached data set that can be stored locally in the browser with web storage. The best place to put this cached data set is in sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration flag) works but is less ideal because it adds to the payload of all HTTP requests. Putting it in localStorage is less ideal because we do want the invalidation flag(s) to go away when the browser session ends, because that’s when the original browser cache will expire anyway. There is one caveat to sessionStorage: if a user opens a new tab or window, the browser will drop the sessionStorage in that new tab or window, but may reuse the browser cache. The only workaround I know of at the moment is to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be. We will need to ignore case when matching URLs with cache invalidation flags.

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {
    // capture HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // update invalidation flags data set
    for (var i in invalidatedItemsHeader) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }
    // store revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET, which ignores case on the path, we need
// a function to force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // if the item matches, return the cache-busted URL
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }
    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is retained as the value appended to the URL. This allows the data to remain cached, just updated to that point in time. If invalidation occurs again, that appended value is revised to the new date/time.

Running this now, after invalidation is triggered by the server, the subsequent request for the data is appended with a cache-busting querystring field.


In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might be by following these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited value to list multiple items; semicolon is a reserved character in URIs and a valid delimiter within HTTP header values, so it is a safe list separator (just do not URI-encode the semicolons that act as delimiters). A server-side sketch of this follows the list.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:
    {
        "http://url.com/route/action": (date_value_of_invalidation_flag),
        "http://url.com/route/action/2": (date_value_of_invalidation_flag)
    }
  4. Hijack all XHR requests so that the URL is appropriately appended with cachebusting querystring parameter if the URL was found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default convention ignore case (“/Route/Action” == “/route/action”) on the path, but the HTTP specification does not and therefore the browser cache bucket will not ignore case. Force all URL checks for invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring, otherwise for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters are in consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client.
  7. These steps are for “pull”-based XHR-driven invalidation flags being pulled from the server via XHR. For “push”-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication using WebSockets or long polling. Server application logic can then invoke this channel or hub to send an invalidation flag to the client or to all clients.
  8. On the client side, an invalidation flag “push” triggered in #7 above, for which #1 and #2 above would no longer apply, can still utilize #3 through #6.
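
As for the server-side sketch promised in step 1, a hypothetical InvalidateCacheItems helper (building on the InvalidateCacheItem method shown earlier; not part of the downloadable sample) might flag several URLs at once like this:

public void InvalidateCacheItems(params string[] urls)
{
    foreach (var url in urls)
    {
        HttpResponse.RemoveOutputCacheItem(url); // invalidate on server
    }
    // Join with ';', a reserved character in URIs and a legal delimiter
    // within an HTTP header value; do not URI-encode it here.
    Response.AddHeader("X-Invalidate-Cache-Item", string.Join(";", urls));
}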

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip


Tags:

ASP.NET | C# | Javascript | Techniques | Web Development

Four Methods Of Simple Caching In .NET

by Jon Davis 30. August 2010 01:07

Caching is a fundamental component of working software. The role of any cache is to decrease the performance footprint of data I/O by moving it as close as possible to the execution logic. At the lowest level of computing, a thread relies on a CPU cache to be loaded up each time the thread context is switched. At higher levels, caches are used to offload data from a database into a local application memory store or, say, a Lucene directory for fast indexing of data.

Although there are some decent hashtable-become-dictionary scenarios in .NET as it has evolved, until .NET 4.0 there was never a general purpose caching table built directly into .NET. By “caching table” I mean a caching mechanism where keyed items get put into it but eventually “expire” and get deleted, whether by:

a) a “sliding” expiration whereby the item expires in a timespan-determined amount of time from the time it gets added,

b) an explicit DateTime whereby the item expires as soon as that specific DateTime passes, or

c) the item is prioritized and is removed from the collection when the system needs to free up some memory.

There are some methods people can use, however.

Method One: ASP.NET Cache

ASP.NET, or the System.Web.dll assembly, does have a caching mechanism. It was never intended to be used outside of a web context, but it can be used outside of the web, and it does perform all of the above expiration behaviors in a hashtable of sorts.

After scouring Google, it appears that quite a few people who have discussed the built-in caching functionality in .NET have resorted to using the ASP.NET cache in their non-web projects. This is no longer the most-available, most-supported built-in caching system in .NET; .NET 4 has an ObjectCache which I’ll get into later. Microsoft has always been adamant that the ASP.NET cache is not intended for use outside of the web. But many people are still stuck in .NET 2.0 and .NET 3.5, and need something to work with, and this happens to work for many people, even though MSDN says clearly:

Note:

The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly.

The class for the ASP.NET cache is System.Web.Caching.Cache in System.Web.dll. However, you cannot simply new-up a Cache object. You must acquire it from System.Web.HttpRuntime.Cache.

Cache cache = System.Web.HttpRuntime.Cache;

Working with the ASP.NET cache is documented on MSDN here.
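
For example, adding and retrieving an item looks something like this minimal sketch (the key and the customerRecord value are hypothetical; the Insert overload shown is the standard one taking expiration and priority arguments, and assumes a using for System.Web.Caching):

Cache cache = System.Web.HttpRuntime.Cache;

// Insert with a 30-minute sliding expiration and a priority that opts out
// of memory-pressure eviction (only the expiration policy removes it).
cache.Insert(
    "customer:42",                  // the key must be a string
    customerRecord,                 // customerRecord is hypothetical
    null,                           // no CacheDependency
    Cache.NoAbsoluteExpiration,     // no fixed expiration date
    TimeSpan.FromMinutes(30),       // sliding expiration window
    CacheItemPriority.NotRemovable, // see con #2 below
    null);                          // no removal callback

// The indexer (and Get()) returns null when the key is missing or expired.
var cached = cache["customer:42"];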

Pros:

  1. It’s built-in.
  2. Despite the .NET 1.0 syntax, it’s fairly simple to use.
  3. When used in a web context, it’s well-tested. Outside of web contexts, according to Google searches it is not commonly known to cause problems, despite Microsoft recommending against it, so long as you’re using .NET 2.0 or later.
  4. You can be notified via a delegate when an item is removed, which is necessary if you need to keep it alive and you could not set the item’s priority in advance.
  5. Individual items have the flexibility of any of (a), (b), or (c) methods of expiration and removal in the list of removal methods at the top of this article. You can also associate expiration behavior with the presence of a physical file.

Cons:

  1. Not only is it static, there is only one. You cannot create your own type with its own static instance of a Cache. You can only have one bucket for your entire app, period. You can wrap the bucket with your own wrappers that do things like pre-inject prefixes in the keys and remove these prefixes when you pull the key/value pairs back out. But there is still only one bucket. Everything is lumped together. This can be a real nuisance if, for example, you have a service that needs to cache three or four different kinds of data separately. This shouldn’t be a big problem for pathetically simple projects. But if a project has any significant degree of complexity due to its requirements, the ASP.NET cache will typically not suffice.
  2. Items can disappear, willy-nilly. A lot of people aren’t aware of this; I wasn’t, until I refreshed my knowledge of this cache implementation. By default, the ASP.NET cache is designed to destroy items when it “feels” like it. More specifically, see (c) in my definition of a cache table at the top of this article. If another thread in the same process is working on something completely different, and it dumps high-priority items into the cache, then as soon as .NET decides it needs to reclaim some memory it will start to destroy items in the cache according to their priorities, lower priorities first. All of the documented examples for adding cache items use the default priority, rather than the NotRemovable priority value, which keeps an item from being removed for memory-clearing purposes but still removes it according to the expiration policy. Peppering CacheItemPriority.NotRemovable throughout cache invocations can be cumbersome; otherwise a wrapper is necessary.
  3. The key must be a string. If, for example, you are caching data records where the records are keyed on a long or an integer, you must convert the key to a string first.
  4. The syntax is stale. It’s .NET 1.0 syntax, even uglier than ArrayList or Hashtable. There are no generics here, no IDictionary<> interface. It has no Contains() method, no Keys collection, no standard events; it only has a Get() method plus an indexer that does the same thing as Get(), returning null if there is no match, plus Add(), Insert() (redundant?), Remove(), and GetEnumerator().
  5. Ignores the DRY principle of setting up your default expiration/removal behaviors so you can forget about them. You have to explicitly tell the cache how you want the item you’re adding to expire or be removed every time you add an item.
  6. No way to access the caching details of a cached item such as the timestamp of when it was added. Encapsulation went a bit overboard here, making it difficult to use the cache when in code you’re attempting to determine whether a cached item should be invalidated against another caching mechanism (i.e. session collection) or not.
  7. Removal events are not exposed as events and must be tracked at the time of add.
  8. And if I haven’t said it enough, Microsoft explicitly recommends against using it outside of the web. And if you’re cursed with .NET 1.1, you’re not supposed to use it with any confidence of stability at all outside of the web, so don’t bother.

Method Two: The Enterprise Library Caching Application Block

Microsoft’s recommendation, up until .NET 4.0, has been that if you need a caching system outside of the web then you should use the Enterprise Library Caching Application Block.

The Microsoft Enterprise Library is a coordinated effort between Microsoft and a third party technology partner company Avanade, mostly the latter. It consists of multiple meet-many-needs general purpose technology solutions, some of which are pretty nice but many of which are in an outdated format such as a first-generation O/RM that doesn’t support LINQ.

I have personally never used the EntLib Caching App Block. It looks bloated. Others on the web have commented that they thought it was bloated, too, but once they started using it they saw that it was pretty simple and straightforward. I personally am not sold, but since I haven’t tried it I cannot pass a fair judgment. So, for whatever it’s worth, here are some pros/cons:

Pros:

  1. Recommended by Microsoft as the cache mechanism for non-web projects.
  2. The syntax looks to be familiar to those who were using the ASP.NET cache.

Cons:

  1. Not built-in. The assemblies must be downloaded and separately referenced.
  2. Not actually created by Microsoft. (EntLib is an Avanade solution that Microsoft endorses as its own.)
  3. The syntax is the same .NET 1.0 style stuff that I believe defaults to normal priority (auto-ejects the items when memory management feels like it) rather than NotRemovable priority, and does not reinforce the DRY principles of setting up your default expiration/removal behaviors so you can forget about them.
  4. Potentially bloated.

Method Three: .NET 4.0’s ObjectCache / MemoryCache

Microsoft finally implemented an abstract ObjectCache class in the latest version of the .NET Framework, and a MemoryCache implementation that inherits and implements ObjectCache for in-memory purposes in a non-web setting.

System.Runtime.Caching.ObjectCache is in the System.Runtime.Caching.dll assembly. It is an abstract class that declares basically the same .NET 1.0 style interfaces that are found in the ASP.NET cache. System.Runtime.Caching.MemoryCache is the in-memory implementation of ObjectCache and is very similar to the ASP.NET cache, with a few changes.

To add an item with a sliding expiration, your code would look something like this:

var config = new NameValueCollection();
var cache = new MemoryCache("myMemCache", config);
cache.Add(new CacheItem("a", "b"),
    new CacheItemPolicy
    {
        Priority = CacheItemPriority.NotRemovable,
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
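
Retrieval and removal on that same instance would look something like this (a small sketch continuing from the code above):

// Contains() checks without retrieving; Get() returns null on a miss.
if (cache.Contains("a"))
{
    var value = (string)cache.Get("a");
}

// Remove() evicts explicitly and returns the removed value (or null).
cache.Remove("a");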

Pros:

  1. It’s built-in, and now supported and recommended by Microsoft outside of the web.
  2. Unlike the ASP.NET cache, you can instantiate a MemoryCache object instance.
    Note: It doesn’t have to be static, but it should be—that is Microsoft’s recommendation (see yellow Caution).
  3. A few slight improvements have been made vs. the ASP.NET cache’s interface, such as the ability to subscribe to removal events without necessarily being there when the items were added, the redundant Insert() was removed, items can be added with a CacheItem object with an initializer that defines the caching strategy, and Contains() was added.

Cons:

  1. Still does not fully reinforce DRY. From my small amount of experience, you still can’t set the sliding expiration TimeSpan once and forget about it. And frankly, although the policy in the item-add sample above is more readable, it necessitates horrific verbosity.
  2. It is still not generically-keyed; it requires a string as the key. So you can’t store as long or int if you’re caching data records, unless you convert to string.

Method Four: Build One Yourself

It’s actually pretty simple to create a caching dictionary that performs explicit or sliding expiration. (It gets a lot harder if you want items to be auto-removed for memory-clearing purposes.) Here’s all you have to do (a minimal sketch follows the list):

  1. Create a value container class called something like Expiring<T> or Expirable<T> that would contain a value of type T, a TimeStamp property of type DateTime to store when the value was added to the cache, and a TimeSpan that would indicate how far out from the timestamp the item should expire. For explicit expiration you can just expose a property setter that derives the TimeSpan by subtracting the timestamp from the given date.
  2. Create a class, let’s call it ExpirableItemsDictionary<K,T>, that implements IDictionary<K,T>. I prefer to make it a generic class with <K,T> defined by the consumer.
  3. In the class created in #2, add a Dictionary<K,Expiring<T>> as a property and call it InnerDictionary.
  4. The implementation of IDictionary<K,T> in the class created in #2 should use the InnerDictionary to store cached items. Encapsulation would hide the caching mechanism details via instances of the type created in #1 above.
  5. Make sure the indexer (this[]), ContainsKey(), etc., are careful to clear out expired items and remove the expired items before returning a value. Return null in getters if the item was removed.
  6. Use thread locks on all getters, setters, ContainsKey(), and particularly when clearing the expired items.
  7. Raise an event whenever an item gets removed due to expiration.
  8. Add a System.Threading.Timer instance and rig it during initialization to auto-remove expired items every 15 seconds. This is the same behavior as the ASP.NET cache.
  9. You may want to add an AddOrUpdate() routine that pushes out the sliding expiration by replacing the timestamp on the item’s container (Expiring<T> instance) if it already exists.
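
Here is the minimal sketch promised above, covering steps 1 through 5 (all names are mine; the locking, cleanup timer, removal events, and the rest of the IDictionary<K,T> surface from steps 6 through 9 are omitted for brevity, and the usual System and System.Collections.Generic usings are assumed):

public class Expiring<T>
{
    public T Value { get; set; }
    public DateTime TimeStamp { get; set; }
    public TimeSpan ExpiresAfter { get; set; }

    // Explicit expiration: store a DateTime as an offset from the timestamp.
    public DateTime ExpiresOn
    {
        get { return TimeStamp + ExpiresAfter; }
        set { ExpiresAfter = value - TimeStamp; }
    }

    public bool IsExpired
    {
        get { return DateTime.Now > ExpiresOn; }
    }
}

public class ExpirableItemsDictionary<K, T>
{
    private readonly Dictionary<K, Expiring<T>> InnerDictionary
        = new Dictionary<K, Expiring<T>>();

    // The default sliding window applied when none is specified (DRY).
    public TimeSpan DefaultExpiration = TimeSpan.FromMinutes(15);

    public T this[K key]
    {
        get
        {
            Expiring<T> item;
            if (InnerDictionary.TryGetValue(key, out item))
            {
                if (!item.IsExpired) return item.Value;
                InnerDictionary.Remove(key); // clear out the expired item
            }
            return default(T); // null for reference types
        }
        set
        {
            InnerDictionary[key] = new Expiring<T>
            {
                Value = value,
                TimeStamp = DateTime.Now,
                ExpiresAfter = DefaultExpiration
            };
        }
    }

    public bool ContainsKey(K key)
    {
        Expiring<T> item;
        if (!InnerDictionary.TryGetValue(key, out item)) return false;
        if (item.IsExpired)
        {
            InnerDictionary.Remove(key);
            return false;
        }
        return true;
    }
}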

Microsoft has to support its original designs because its user base has built up a dependency upon them, but that does not mean that they are good designs.

Pros:

  1. You have complete control over the implementation.
  2. Can reinforce DRY by setting up default caching behaviors and then just dropping key/value pairs in without declaring the caching details each time you add an item.
  3. Can implement modern interfaces, namely IDictionary<K,T>. This makes it much easier to consume as its interface is more predictable as a dictionary interface, plus it makes it more accessible to helpers and extension methods that work with IDictionary<>.
  4. Caching details can be unencapsulated, such as by exposing your InnerDictionary via a public read-only property, allowing you to write explicit unit tests against your caching strategy as well as extend your basic caching implementation with additional caching strategies that build upon it.
  5. Although it is not necessarily a familiar interface for those who already made themselves comfortable with the .NET 1.0 style syntax of the ASP.NET cache or the Caching Application Block, you can define the interface to look like however you want it to look.
  6. Can use any type for keys. This is one reason why generics were created. Not everything should be keyed with a string.

Cons:

  1. Is not invented by, nor endorsed by, Microsoft, so it is not going to have the same quality assurance.
  2. Assuming only the instructions I described above are implemented, does not “willy-nilly” clear items for clearing memory on a priority basis (which is a corner-case utility function of a cache anyway .. BUY RAM where you would be using the cache, RAM is cheap).


Among all four of these options, this is my preference. I have implemented this basic caching solution. So far it seems to work perfectly; there are no known bugs (please contact me with comments below or at jon-at-jondavis if there are!!), and I intend to use it in all of my smaller side projects that need basic caching. Here it is:

Download: ExpirableItemDictionary.zip

Worthy Of Mention: AppFabric, NoSQL, Et Al

Notice that the title of this blog article indicates “Simple Caching”, not “Heavy-Duty Caching”. If you want to get into the heavy-duty stuff, you should look at memcached, AppFabric, ScaleOut, or any of the many NoSQL solutions that are shaking up the blogosphere and Q&A forums.

By “heavy duty” I mean scaled-out solutions, as in caching scaled across multiple servers. There are some non-trivial costs associated with scaling out.

Scott Hanselman has a blog article called, “Installing, Configuring and Using Windows Server AppFabric and the ‘Velocity’ Memory Cache in 10 Minutes”. I already tried installing AppFabric; it was a botched and failed job. But I plan to try to tackle it again, now that Microsoft has hopefully updated their online documentation and provided us with some walk-throughs, i.e. Scott’s.

As briefly hinted in the introduction of this article, another approach to caching is using something like Lucene.Net to store database data locally. Lucene calls itself a “search engine”, but really it is a NoSQL [Wikipedia link] storage mechanism in the form of a queryable index of document-based tables (called “directories”).

Other NoSQL options abound, most of them being Linux-oriented like CouchDB (which runs but stinks on Windows) but not all of them. For example, there’s RavenDB from Oren Eini (author of the popular ayende.com blog) which is built entirely in and for .NET as a direct competitor to the likes of Lucene.net. There are a whole bunch of other NoSQL options listed over here.

I have had the pleasure of working with Lucene.net and the annoyance of poking at CouchDB on Windows (Linux folks should not write Windows software, bleh .. learn how to write Windows services, folks), but not much else. Perhaps I should take a good look at RavenDB next.

 


Tags:

C# | Software Development

I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as “a C++ fixxer-upper”. I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google’s mantra still lines up. The video demonstrates a lot of evils of C++. Despite some attempts at self-edumacation, I personally cannot read nor write real-world C++ because I get lost in the gobbligook that he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, The ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives as Java and now Go had, and it has actually been fundamentally evolving at the core, such as to replace patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.

While trying to take Go in with great excitement, I’ve been forced to conclude that Google’s own message delivery sucks, mainly by completely ignoring some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there)—much like Microsoft’s message delivery of C#’s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory but I can’t find it). The video linked above doesn’t even show a single example of Java suckage; it just made these painful accusations of Java being right in there with C++ as being a crappy language to work with. I haven’t coded in Java in about a decade, honestly, but back then Java was the shiznat and code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways and ultimately I find it far less appetizing than C#. But where is Google’s nod to C#? Oh that’s right, C# doesn’t exist, it’s a fragment of someone’s imagination because Google considers Microsoft (C#’s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator’s true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did as far as trying to “fix” C++. In fact, most of the problems Go describes of C++ were the focus of C#’s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google’s objectives themselves, we don’t see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for previously C++ development efforts using perhaps emacs or vi (Rob even listed IDEs as a failure in modern languages). Their requirements in Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn’t get in the way of their business objectives, that performs very fast, and fast compilation that lets them escape out of vi to invoke gcc or whatever compiler very quickly and go back to coding. These are certainly great nice-to-haves, but I’m pretty sure that’s about it.

Consider, in contrast, .NET’s objectives a decade ago, .NET being at the core of applied C# as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn’t suck because its intentions were gorgeous (to have a language-neutral marshalling paradigm between runtimes), but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocators. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature, it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and the tools because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant level. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s and thus theoretically Go’s performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement to which Google, in its corporate and web sandboxes, is oblivious. I’m not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement which Google in its corporate and web sandboxes is oblivious to, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here, instead they push Python and Java on their app cloud? I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn’t get this; it’s stupid to have to recompile an application’s distribution, software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation. Once again, Google is not out in the real world, they live in a box (their headquarters and their exposed web).

And with the goals of C#,

  • C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go’s web site (http://golang.org/) because the language does not look like much of a readability / maintainability improvement to what Java and C# offered up.

  • Go supposedly offers up memory management, but still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static!
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := operator syntax rather than the = operator for assignment. I suppose this would help eliminate the issue where people would type = where they meant to type == and destroy their variables.
  • Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized, such that late-executed code that should be called after immediate code doesn’t require putting it below the immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it.
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is
       
        var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

       
      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..
       
        int[] p = null; // declares an array variable; p is null; rarely useful
        var v = new int[100]; // the variable v now refers to a new array of 100 ints

       
      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, nevermind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbligook. C# offers richly defined System.Array and all this stuff is transparent to the coder who really doesn’t need to know that there are pointers, somewhere, associated with the reference to your array, isn’t that the way it all should be? Is it really necessary to have a completely different semantic (new() vs. make())? Ohh yeah. The frickin pointer vs. the reference.
  • I see Go has a fmt.Printf(), plus a fmt.Fprintf(), plus a fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is; http://golang.org/search?q=overloading
  • Go has “goroutines”. It’s basically, “go func() { /* do stuff */ }” and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code (see the one-liner after this list), so yes, it’s supported. F# (a young .NET sibling of C#) has this, too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s.
  • Go has channels for communication purposes. C# has WCF for this which is frankly a mess. The closest you can get to Go on the CLR as far as channels go is Axum, which is variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover.
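
As for the goroutine one-liner promised above, the C# counterpart of spawning an anonymous delegate onto a worker thread pool thread is simply this (System.Threading assumed):

// Roughly the C# counterpart of: go func() { /* do stuff */ }()
ThreadPool.QueueUserWorkItem(delegate { /* do stuff */ });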

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (that, frankly, typically suck at a design level when they aren’t tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, honestly I think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined everywhere on planet Earth through the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix and the applications and services that run on it have been the majority; if nothing else, Microsoft has most certainly still had the lead in software as a company, to which my point is targeted.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly the fact that Go presentations neglect C# casts serious doubt on Go’s trustworthiness.

In my opinion, a better approach to what Google was trying to do with Go would be to take a popular language, such as C#, F#, or Axum, and break it away from the language’s implementation libraries, i.e. the .NET platform’s BCL, replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be both a) a much better language and runtime than Go because it would offer most of the benefits of Go but in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR’s and C#’s multitude of advantages over C/C++), but also b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al, our needs are quite well met. So thanks anyway, Google, but, really, you should go now.


Tags:

C# | F# | General Technology | Mono | Opinion | Software Development

How To Reference A Property By Name Without Using String Constants

by Jon Davis 13. June 2010 02:39

I’m working on some code here where I’m loading an entity using a dictionary. I’m not using .NET 4.0, so I don’t get to use the dynamic keyword and the IDictionary-based property reflection that comes with it.

My original code looked something sort of like this:

public void Populate(IDictionary<string, object> values)
{
    foreach (var kvp in values)
    {
        var key = kvp.Key;
        var value = kvp.Value;
        switch (key)
        {
            case "FirstName":
                this.FirstName = (string)value; // cast needed; value is object
                break;
            case "LastName":
                this.LastName = (string)value;
                break;
            // and so on and so forth
        }
    }
}

This looks sloppy. First of all, I don’t want to have to declare each and every property again for its setter just to support IDictionary. Secondly, what if I rename my properties? Then I’d have to go back and update this switch…case statement again. I want to use a strongly-typed property reference, so that if I rename the property I can use the IDE’s built-in refactoring functionality with which all uses of that property are automatically updated. I don’t get that benefit if I’m using string constants.

I’d come across some interesting tutorials in the past that addressed this problem, and I thought I’d give it my own go based on their insight. My revised solution is not by any means original. (Nothing I ever do is, I suppose.)

To make a long story short, I flexed some recently learned LINQ expression muscle on a member evaluator expression to obtain the strongly typed property’s name. In my implementation (not entirely shown here) I actually do need to verify the property by name because I have a bunch of setter logic that I don’t expose here, such as replacing the value being an IEnumerable<string> with logic that clears an ObservableCollection<string> and populates it with the value. Anyway, here’s the basic flow:

public override void Populate(IDictionary<string, object> values)
{
    foreach (var kvp in values)
    {
        var key = kvp.Key;     // the original snippet omitted these two
        var value = kvp.Value; // locals; they are needed below
        if (key == base.ResolvePropertyName(() => this.SpecialProperty))
        {
            // this.SpecialProperty = value;
            // do something special with setter here
        }
        else if (key == base.ResolvePropertyName(() => this.AnotherSpecialProperty))
        {
            // this.AnotherSpecialProperty = value;
            // do something special with setter here
        }
        else
        {
            base.Populate(key, value);
        }
    }
}
/// and in the base class ....

/// <summary>
/// Returns the name of the property. Syntax:
/// <code>
/// var propertyName = ResolvePropertyName(() =&gt; this.MyProperty)
/// </code>
/// </summary>
/// <param name="e"></param>
/// <returns></returns>
protected string ResolvePropertyName<T>(Expression<Func<T>> e)
{
    // Forward to the Expression-based overload so the recursive
    // call on the lambda's Body compiles.
    return ResolvePropertyName((Expression)e);
}

protected string ResolvePropertyName(Expression e)
{
    if (e.NodeType == ExpressionType.Lambda)
    {
        return ResolvePropertyName(((LambdaExpression)e).Body);
    }
    if (e.NodeType == ExpressionType.MemberAccess)
    {
        return ((MemberExpression)e).Member.Name;
    }
    throw new InvalidOperationException("Given expression is not type MemberAccess.");
}
private static Dictionary<Type, List<PropertyInfo>> TypeProperties
    = new Dictionary<Type, List<PropertyInfo>>();

/// <summary>
/// Populates a property of the current entity with the given key/value pair.
/// </summary>
/// <param name="property"></param>
/// <param name="value"></param>
public void Populate(string property, object value)
{
    var t = this.GetType();
    List<PropertyInfo> pis;
    if (!TypeProperties.ContainsKey(t))
    {
        TypeProperties[t] = t.GetProperties().ToList();
    }
    pis = TypeProperties[t];
    var pi = pis.Find(pif => pif.Name == property);
    if (pi == null) throw new KeyNotFoundException("Property does not exist: \"" + property + "\"");
    pi.SetValue(this, value, null);
}


The Populate(property, value) method is basic .NET 2.0 reflection with some sorta-kinda basic caching. The method above it uses LINQ expressions to extract the member access invocation as expressed in the LINQ expression.

I wrote a unit test to invoke this and stepped through it with the debugger. Seems to work fine.

After further tweaking, I migrated the base class functions to a utility class so that these functions can be used with anything, not just with objects that inherit a common base class. I also improved the reflection caching of Populate() so that the properties are properly hashed on their property names.

public static class ObjectUtility
{
    /// <summary>
    /// Returns the name of the property. Syntax:
    /// <code>
    /// var propertyName = obj.ResolvePropertyName(() =&gt; obj.MyProperty)
    /// </code>
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="obj"></param>
    /// <param name="expr"></param>
    /// <returns></returns>
    public static string ResolvePropertyName<T>(this object obj, Expression<Func<T>> expr)
    {
        // The object itself is unused; it only enables extension method syntax.
        return ResolvePropertyName(expr);
    }
    /// <summary>
    /// Returns the name of the property. Syntax:
    /// <code>
    /// var propertyName = ObjectUtility.ResolvePropertyName(() =&gt; obj.MyProperty)
    /// </code>
    /// </summary>
    /// <param name="e"></param>
    /// <returns></returns>
    public static string ResolvePropertyName(Expression e)
    {
        if (e.NodeType == ExpressionType.Lambda)
        {
            return ResolvePropertyName(((LambdaExpression)e).Body);
        }
        if (e.NodeType == ExpressionType.MemberAccess)
        {
            return ((MemberExpression)e).Member.Name;
        }
        throw new InvalidOperationException("Given expression is not type MemberAccess.");
    }
    private static Dictionary<Type, Dictionary<string, PropertyInfo>> TypeProperties
        = new Dictionary<Type, Dictionary<string, PropertyInfo>>();
    /// <summary>
    /// Populates a property of an object with the given property name/value pair.
    /// </summary>
    /// <param name="obj"></param>
    /// <param name="property"></param>
    /// <param name="value"></param>
    public static void Populate(this object obj, string property, object value)
    {
        var t = obj.GetType();
        if (!TypeProperties.ContainsKey(t))
        {
            // Cache the type's properties, hashed by property name.
            var dic = new Dictionary<string, PropertyInfo>();
            foreach (var item in t.GetProperties())
            {
                if (!dic.ContainsKey(item.Name))
                {
                    dic[item.Name] = item;
                }
            }
            TypeProperties[t] = dic;
        }
        var pisdic = TypeProperties[t];
        PropertyInfo pi;
        if (!pisdic.TryGetValue(property, out pi))
            throw new KeyNotFoundException("Property does not exist: \"" + property + "\"");
        pi.SetValue(obj, value, null);
    }
}
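A quick usage sketch (Widget is a hypothetical class for illustration, not from the actual project):

public class Widget
{
    public string Title { get; set; }
}

var widget = new Widget();
// Resolve the property name from a strongly typed expression...
var name = widget.ResolvePropertyName(() => widget.Title); // "Title"
// ...then set the property by name via the cached reflection path.
widget.Populate(name, "Hello");
Console.WriteLine(widget.Title); // prints "Hello"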

 

 

Gemli.Data: Basic LINQ Support

by Jon Davis 6. June 2010 04:13

In the Development branch of Gemli.Data, I finally got around to adding some very, very basic LINQ support. The following test scenarios currently seem to function correctly:

var myCustomEntityQuery = new DataModelQuery<DataModel<MockObject>>();
// Scenarios 1-3: Where() lambda, boolean operator, method exec
var linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue == "dah");
linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue != "dah");
// In the following scenario, GetPropertyValueByColumnName() is an explicitly supported method
linqq = myCustomEntityQuery.Where(mo => ((int)mo.GetPropertyValueByColumnName("customentity_id")) > -1);
// Scenario 4: LINQ formatted query
var q = (from mo in myCustomEntityQuery
         where mo.Entity.MockStringValue != "st00pid"
         select mo) as DataModelQuery<DataModel<MockObject>>;
// Scenario 5: LINQ formatted query execution with sorted ordering
var orderedlist = (from mo in myCustomEntityQuery
                   where mo.Entity.MockStringValue != "def"
                   orderby mo.Entity.MockStringValue
                   select mo).ToList();
// Scenario 6: LINQ formatted query with multiple conditions and multiple sort members
var orderedlist2 = (from mo in myCustomEntityQuery
                    where mo.Entity.MockStringValue != "def" && mo.Entity.ID < 3
                    orderby mo.Entity.ID, mo.Entity.MockStringValue
                    select mo).ToList();

This is a great milestone, one I’m very pleased with myself for finally accomplishing. There’s still a ton more to do but these were the top 50% or so of LINQ support scenarios needed in Gemli.Data.

Unfortunately, adding LINQ support brought about a rather painful rediscovery of critical missing functionality: the absence of support for OR (||) and condition groups in Gemli.Data queries. *facepalm*  I left it out earlier as a to-do item but completely forgot to come back to it. That’s next on my plate. *burp*

Windows Phone 7 SDK First Impressions

by Jon Davis 25. April 2010 13:20

A couple co-workers and I had a couple days’ opportunity at my day job to tinker with creating a Windows Phone 7 app as a proof of concept. We successfully created an app that lets you find and purchase a company product (complete with checkout, although the checkout part was web-based), access account information via a web interface, and contact customer support via either an e-mail form or a “Call Us” button that, yes, would dial customer support via the Phone function of the device. (Strangely, the Phone part was the only part that did not work, nor did it give any “Not implemented” or “Not supported” feedback.)

We accomplished all that in two days, plus some reading up on XAML (i.e. I’d spent the entire prior Saturday wading through the Petzold PDF that had been linked from the developer web site .. where is that Petzold PDF link now?), plus some insider knowledge of existing APIs and some preexisting web interfaces used with other mobile devices. The only huge hold-up for us, as far as I was concerned, was the learning curve of the tool chain. I mean, it’s C#, which I’m fluent in, but it’s also XAML, which I’ve used but never mastered, and it’s brand-new Silverlight 4.0 (a binary-incompatible derivative thereof, I think). And I’m still pretty green with MVVM and still don’t understand a lot of its basics, such as how and when to use ICommand. But we got a prototype built, with only two days to do it.

That said and done, however, I walked away from that little mini-project with a few opinions. A couple years ago I tinkered with iPhone development with the Xcode / Obj-C / Interface Builder tool chain, so I have the two phone SDK experiences to compare.

One theme – “minimalistic” – to rule them all?

My immediate observation is that the UI aesthetic guidance in the Apple toolchain is nowhere to be seen in Microsoft’s toolchain, and I expect this will hurt Microsoft. Sure, Microsoft pre-packages a couple white-on-black templates as an aesthetic suggestion to get you started with a basic theme. But that’s where guidance ends. You’re given a few XAML controls to get you started, and everything starts at a bland white-on-black. A button, for example, is just a white rectangle: no gradient, no border bezel. Same story with the white-square-on-a-white-line slider control. All of this reinforces the idea that white-on-black is good because OLED displays consume far less energy rendering mostly-black screens than they do rendering everything in full color.

[Screenshot: the default Windows Phone 7 slider control]
Wow, Microsoft, you outdid yourselves here, that is a beautiful slider handle! (Not.)

And on that latter note, why didn’t Microsoft innovate here? For example: as soon as the user interacts with the device, everything renders in rich, full color; then, after 15-30 seconds of inactivity, the view goes into an “interactive screen saver” mode whereby the colors fade to their inverse and you see the white-on-black color scheme that’s on the device now. Toggle this behavior for your app or navigation “page” in the SDK with a boolean switch and a timespan setting. Just a thought.

Whereas, with Apple, in Interface Builder (Apple’s GUI designer) you’re given a nice big library of richly colored controls and widgets including a nice tab control on the bottom, a header control on top where you can have your richly labeled back and forward buttons, and a number of nice interactive controls such as a “checkbox” that actually renders as a beautifully detailed ON/OFF slider, complete with simulated physics and even a drop shadow on the moving slider.

It’s not the absence of aesthetic tooling that bothers me here. Actually, Microsoft left the door wide open for aesthetic freedom, as XAML richly supports deliciously detailed interactive graphics with Expression Blend and the like. My issue is the fact that other than “minimalistic white-on-black” there is no guidance, an issue that plagued Windows Mobile 6 and previous Windows Mobile flavors. You will have some beautiful Windows Phone 7 apps like the Foursquare app, and then you will have some apps from people who learned Silverlight and went nuts with gradients and bright, flashy, ugly designs, or who simply ported their Silverlight apps straight over to the Windows Phone 7 SDK. Aesthetic inconsistency is going to be a nuisance. This happened in Apple’s developer community, too, but you’re almost required to roll your own aesthetic with the Windows Phone 7 SDK because its prefab tooling is, at least so far, incomplete. Then again, this is still just a CTP (a beta), so we don’t know how much more complete the SDK will be on its release.

The SDK also comes with a “list application” template, which demonstrates how to have that pretty list with the 3D rotating items when you tap on them or navigate around. That was really neat, but then I found myself scratching my head wondering how to make that behavior stick across the rest of the application. Can I declare this animation/storyboard definition once and reference it with a one-line behavioral reference of some kind across all of my “navigation pages”? If so, that was not demonstrated at all, and it looked like I’d have to copy and paste everything in the templated demonstration to each and every page I’d create. We ultimately just ignored the animation code, saddened that only the “root menu” showed this animation, because it’s really not worth it to copy/paste so much code (about 50 lines of XAML) everywhere, making a mess of the markup.

No HTML DOM interaction

My other complaint is with the Silverlight 4 web browser component. It is truly wonderful that Silverlight has an integrated web browser component, and I adore Microsoft for adding it especially for the Windows Phone 7 SDK, as it’s absolutely necessary to integrate the web browser with many Internet-enabled applications. That said, however, I found no easy way of interacting with the DOM from the Silverlight CLR. In Windows’ Internet Explorer, you had to make a separate COM reference to mshtml.dll before you could interact with the DOM. I used to do this heavily. Perhaps I’m spoiled. But having no access to the document interface and, as far as I can tell, being able only to navigate and toggle scripting support really worries me that my hands will be tied here. It was, after all, Microsoft who invented dynamic HTML with fully dynamic content back in Internet Explorer 4, when Microsoft’s innovation inspired me so much that I decided to go all-out into being a web developer. That was way over a decade ago. I still want that power; XAML doesn’t always cut it.

Silverlight (and XNA) awesomesauce in a bag

I didn’t have a chance to play with XNA on WP7 much yet, but just knowing that support for XNA is there is very exciting. XNA is a fantastic platform to build games on, as it is approachable even for a beginning developer. My only complaint about XNA is that there is no virtual keyboard support. I did verify by fiddling with it that the physical keyboard seems to be exposed in the XNA API, though.

Despite my complaints about the WP7 SDK, WP7 is going to be a game changer, and a huge hit for the developer community, I’m sure of it, simply because it allows you to leverage your existing C# or VB.NET skills and prototype functional software on the device with beauty and finesse. Heck, you can even leverage Ruby skills for it. For some unknown reason, IronPython won’t run on it yet; I’m sure support for that will come before RTM, particularly given the volume of feedback for it.

But you could already?

Actually, C# developers have been able to write Windows Mobile software quickly and easily for years, albeit not with Silverlight nor with XNA. But Windows Mobile has lost its market share; its consumer sales are now akin to Linux’s market share on the desktop, about 2%. Microsoft is going to have to work hard, harder than they have worked so far from what I can see, at winning people over to Windows Phone 7 and away from the iPhone, which now takes up a huge chunk of the mobile market share. They also have to compete with Android, which already set out to compete with the iPhone, Blackberry, and Windows Mobile, and is doing an incredibly good job of it.

Frankly, looking at the iPhone 3GS vs. Nexus One (Google Phone) demos, I can’t help but look at Windows Phone 7 and laugh at it. Windows Mobile 6 may have had the awful Start menu, the most ridiculous interface ever to have been put onto a mobile device, but with that stupid Start menu the platform also retained the “many apps and their icons” notion of an application playground, which is strong in the iPhone and Android as well but not so easy to find in WP7. I have difficulty seeing how a marketplace would even work for WP7. Will installed apps be categorized so that the user finds their installed apps by category? Surely WP7 will not just throw all apps into one giant vertical list, oh good grief, please tell me no!!

Microsoft has also been alienating people who identify things by icons rather than text. The Live applications team has demonstrated this, for example, by dropping icons completely. In so doing, Microsoft is also alienating the international communities. Sure, text can be translated, but it doesn’t make a particularly great sell when the first impression of an interface is of gobs and gobs of Engrish text and you don’t speak Engrish. Meanwhile, many of us who do speak English still prefer attractive icons over avant-garde lower-case text headings. When you have five or so menu items in 72-point font, it’s fine, but when you’re trying to find something among 200 others, nothing beats the simple icon. Throwing away the simple icon on a grid won’t make this reality go away; this is not like a floppy drive, it is actually essential.

WP7’s minimalistic interface could actually end up being its downfall. They’re taking a huge risk. I’m hopeful, but sadly slightly doubtful, that it will be enough to throw Microsoft back into the playing field.

Starting over

Maybe I’m just not getting it. Actually, Microsoft’s new direction has had me scratching my head so much that I’m convinced I’m still struggling to get it, and that everything I’m concerned about is quite intentional on Microsoft’s part.

This video demonstrates it quite well. Microsoft wants to “start over”, not just with their SDK but by deemphasizing standalone installed apps and reinforcing the notion of a phone whereby the many apps are not just active but integrated with each other.

The only problem is, I don’t see such cross-app integration demonstrated in the SDK, at least not up front, not yet. Also, why should I as a developer be excited about this new direction if the apps—my development efforts, and yours—are deemphasized in favor of prefab packaging?

Stoked and confused

Overall, I’m both stoked and confused. This is an exciting time to delve into this new technology from Microsoft. Microsoft really needs to fill in some holes, though, and these holes are less technological than they are design-related. Their design and marketplace support strategies seem very vague.


Gemli.Data v0.3.0 Released

by Jon Davis 12. October 2009 03:57

I reached a milestone this late night with The Gemli Project and decided to go ahead and release it. This release follows an earlier blog post describing some syntactical sugar that was added along with pagination and count support, here:

http://www.jondavis.net/techblog/post/2009/10/05/GemliData-Pagination-and-Counts.aspx

Remember those tests I kept noting that I still needed to add, to test the deep loading and deep saving functionality? No? Well, I remember them. I kept procrastinating on them. And as it turns out, a lesson has been taught to me several times in the past, and again now while trying to implement a few basic tests, a lesson I never realized was being taught to me until finally just this late night. The lesson goes something like this:

If in your cocky self-confidence you procrastinate the tests of a complex system because it is complex and because you are confident, you are guaranteed to watch them fail when you finally add them.

Mind you, the “system” in context is not terribly complex, but I’ll confess there was tonight a rather ugly snippet of code to wade through, particularly with Jay Leno or some other show playing in the background to keep me distracted. On that latter note, it wasn’t until I switched to ambient music that I was finally able to fix a bug I’d spent hours staring at.

This milestone release represents 130 total unit tests over the lifetime of the project (about 10-20 of them new; I’m not sure exactly how many), not nearly enough, but enough to feel a lot better about what Gemli is shaping up to be. I still need to add a lot more tests, but what these tests exposed was a slew of buggy relationship inferences that needed to be taken care of. I felt the urgency to release now rather than continue adding tests first because the previous release was just not suitably functional on the relationships side.

 

Use A Prefab Comparer For Case-Insensitive Dictionaries

by Jon Davis 22. September 2009 14:56

I'm kicking myself because I always just assumed that generic dictionaries in C#, i.e. Dictionary<string, object>, were case-sensitive. I always assumed that in order to have a case-insensitive dictionary, you had to do one of two things:

  1. public class CaseInsensitiveDictionary<TValue> : Dictionary<string, TValue> { ... }
  2. var item = myDictionary.FirstOrDefault(kvp => kvp.Key.ToLower() == myKey.ToLower());

Actually, Dictionary<K,V> has a constructor that accepts a comparer object with which you can take advantage of prefab case-insensitive key-matching behavior.

Dictionary<string,object> myDictionary = new Dictionary<string,object>(StringComparer.InvariantCultureIgnoreCase);
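Key lookups then match regardless of case. A quick sketch:

myDictionary["Hello"] = 1;
Console.WriteLine(myDictionary["HELLO"]);             // 1
Console.WriteLine(myDictionary.ContainsKey("hello")); // True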

I realized this upon stumbling across the MSDN documentation, which, of course, everyone reads for fun. 

http://msdn.microsoft.com/en-us/library/ms132073.aspx

Gemli Project v0.2 Released

by Jon Davis 21. September 2009 03:57

Earlier this late night I posted the first major overhaul update of Gemli since a month ago. This brings a huge number of design fixes and new features to Gemli.Data and makes it even more usable. It's still alpha, though, as there's a lot more testing to be done, not to mention still some new features to be added.

http://gemli.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=33281

From the release notes:

  • Added to DataProviderBase: SupportsTransactions { get; } , BeginTransaction(), and BeginTransaction(IsolationLevel)
  • Facilitate an application default provider: Gemli.Data.Providers.ProviderDefaults.AppProvider = myDBProvider .. This allows DataModels and DataModelCollections to automatically use the application's data provider when one hasn't already been associated with the model (so you can, for example, invoke .Save() on a new model without setting its provider manually; see the sketch after this list)
  • Replace (and drop) TypeConvertor with DbTypeConverter.
  • Operator overloads on DataModelQuery; you can now use == instead of IsEqualTo(), but this comes at the cost of chainability. (IsEqualTo() is still there, and still supports chaining.)
  • Added SqlDbType attribute property support. DataType attribute value is no longer a DbType and is instead a System.Type. The properties DbType and SqlDbType have been added, and setting any of these three properties will automatically translate and populate the other two.
  • WhereMappedColumn is now WhereColumn
  • Facilitate optional behavior of inferring from all properties not marked as ignore, rather than the old behavior of inferring from none of the unattributed properties unless no properties are attributed. The old behavior currently remains the default. The new behavior can be applied with DataModelTableMappingAttribute's PropertyLoadBehavior property
    • This may change to where the default is InferProperties.NotIgnored (the new behavior); still pondering on this.
  • Add sorting to MemoryDataProvider
  • TempMappingTable is now DataModelMap.RuntimeMappingTable
  • ForeignDataModel.ForeignXXXX was backwards and is now RelatedXXXX (multiple members)
  • DataModelFieldMappingAttribute is now DataModelColumnAttribute.
  • DataModelMap.FieldMappings is now DataModelMap.ColumnMappings.
  • DataModelTableMappingAttribute is now DataModelTableAttribute.
  • DataModelForeignKeyAttribute is now ForeignKeyAttribute.
  • The XML serialization of all elements now follows these renamings and is also now camelCased instead of ProperCase.
  • XML loading of mapping configuration is getting close, if it isn't already there. Need to test.
  • Added DataModelMap.LoadMappings(filePath)
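A rough sketch of that application default provider behavior (MockObject and myDBProvider are assumed to be set up elsewhere, and the DataModel constructor usage here is illustrative, not verified against the release):

Gemli.Data.Providers.ProviderDefaults.AppProvider = myDBProvider;
var model = new DataModel<MockObject>(new MockObject { MockStringValue = "dah" });
model.Save(); // falls back to the application default provider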


Gemli.Data Runs On Mono (So Far)

by Jon Davis 20. September 2009 00:44

I was sitting at my Mac Mini, about to remote into my Windows 7 machine, and I had this MonoDevelop 2.0 icon sitting there (I had installed it a couple days earlier just to see it install and run), so I thought I should give Gemli.Data a quick, simple test run.

[Screenshot: MonoDevelop 2.0 on the Mac running a Gemli.Data smoke test]

Notice the “Application output” window. It werkie!!

I was concerned that Mono hadn’t been fleshed out well enough yet to be stable with Gemli.Data, since I do a lot of reflection-heavy stuff with it to get it to infer things, not to mention System.Data dependency stuff. But it seems to work fine.

This isn’t by any means a real test, more of a quick and dirty mini smoke test. I was very happy to see that it works fine, and my confidence in Mono just went up a notch.

Not shown in the screen shot, I expanded it a tiny bit to have multiple records, then I added sorting to the query. Apparently Gemli doesn't have sort support in the MemoryDataProvider yet, so I used LINQ-to-Objects; the workaround is sketched below. Yes, LINQ syntax worked in Mono, too! Nifty!
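It amounted to something like this (a sketch, not the actual test code; "records" stands in for whatever the MemoryDataProvider query returned, and the member names follow the Gemli LINQ post above):

var sorted = (from mo in records
              orderby mo.Entity.MockStringValue
              select mo).ToList();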


 



