A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date. We could not devise a new approach to route manipulation or rework any of the other fundamental infrastructural designs for the endpoints (there were too many), so our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>
<div id="results"></div>
<div><button id="reload">Reload</button></div>
@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function () {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought rested on the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case, it is actually server-side, probably because of the small amount of data being transferred. I’ll explain after I describe what I did under my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy” semantics, such as varying by route parameters. That is great, except that in our implementation the route and parameters were not changing.

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.

 

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    Response.RemoveOutputCacheItem(Url.Action("GetData"));
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

<div><button id="invalidate">Invalidate</button></div>

$('#invalidate').on('click', function () {
    $.post($APPROOT + "Home/DoSomething", null, function (o) {
        window.location.reload();
    }, 'json');
});

[Screenshot: while Reload has no effect on the Last modified value, the Invalidate button causes the date to increment.]

When testing, this actually worked quite well. But concerns were raised about the memory burden on the server. Personally I think the memory cost of practically any server-side caching is negligible, certainly if the payload is small enough to be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is the transmission itself; the point of caching is to make the user experience as smooth and seamless as possible with minimal waiting, so if the user is waiting for a (cached) payload, while it may be much faster than recalculating or re-acquiring the data, it is still measurably slower than relying on the browser cache.

The default location for OutputCacheAttribute is actually OutputCacheLocation.Any. This indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests:

  • For tiny payloads, the behavior seemed to be caching on the server and no caching on the client.
  • For a large payload from GET requests with querystring parameters, the behavior seemed to be caching on the client, but with an HTTP request carrying an “If-Modified-Since” header, resulting in a 304 Not Modified from the server (indicating it was also cached on the server, but the server verified that the client’s cache remained valid).
  • For a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation checking from the client (no HTTP request for an If-Modified-Since check).

Now, to be quite honest, I am only guessing that these were the distinguishing factors behind these observations. Honestly, I saw variations of these behaviors happening all over the place as I tinkered with scenarios; this was just the initial pattern I felt I was observing.

At any rate, for our purposes we were stuck with relying on “Any” as the location, which in theory would drop the server-side cache entry if the server ran short on RAM (in theory; I don’t know for sure, and I haven’t had time to research it). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you can’t. The browser controls the cache bucket, and no browser provides hooks into the cache to invalidate entries. But we can get smart about this and work around the problem by bypassing the cached data. Cached HTTP results are stored on the basis of the full raw URL of an HTTP GET, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and they are only cached if the HTTP response header directives allowed them to be cached in the first place. So, to bypass the cache, you either don’t cache, or you know up front how long the cache should remain until it expires (neither of these being acceptable in a dynamic application), or you use POST instead of GET, or you vary the URL.
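That keying behavior can be modeled in a few lines. This is only an illustrative sketch of the rules, not how any real browser is implemented; createToyCache is a made-up name:

```javascript
// Toy model of a browser cache: entries are keyed by the full raw URL of a
// GET request and honored until their expiration passes.
function createToyCache(now) {
    var entries = {}; // url -> { body, expires }
    return {
        put: function (url, body, maxAgeSeconds) {
            entries[url] = { body: body, expires: now() + maxAgeSeconds * 1000 };
        },
        get: function (url) {
            var e = entries[url];
            if (e && e.expires > now()) return e.body; // cache hit
            return null; // miss, or expired
        }
    };
}

// Because the key is the *full* URL, varying the querystring bypasses the entry.
var clock = 0;
var cache = createToyCache(function () { return clock; });
cache.put("/Home/GetData", "{ cached payload }", 300);
cache.get("/Home/GetData");       // hit: same URL, not expired
cache.get("/Home/GetData?_=123"); // miss: different raw URL
```

This is why varying the URL is the one lever left to us when we control neither the expiration policy nor the HTTP method.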

Microsoft originally sidestepped the caching problem in ASP.NET 1.x by forcing the “normal” development cycle into the lifecycle of <form> tags that always used the POST method over HTTP. Responses to POST requests are never cached. But POSTing is not clean, as it violates the semantics of the verb when nothing is being sent up and data is only being retrieved.

You can also use an ETag in the HTTP headers, but that isn’t particularly helpful in a dynamic application, as in effect it is no different from a URL + expiration policy.

To summarize, the options for controlling cache are:

  • Disable caching from the server in the response headers (Cache-Control: no-cache, or the legacy Pragma: no-cache)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST, not GET
  • Use an ETag
  • Vary the URL (case-sensitively)

Given our options, we need to vary up the URL. There a number of approaches to this, but almost all of the approaches involve relying on appending or modifying the querystring with parameters that are expected to be ignored by the server.

$.getJSON($APPROOT + "Home/GetData?_=" + Date.now(), function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you’re reading this blog article, you’re probably rolling your eyes. “Duh.”) jQuery inherently supports cache-busting, but it does not do it on its own in $.getJSON(); it only does it in $.ajax() when the options parameter includes { cache: false }, unless you invoke $.ajaxSetup({ cache: false }); first to disable caching globally. Otherwise, for $.getJSON() you have to do it manually by appending to the URL. (Alright, you can stop rolling your eyes at me now, I’m just trying to be thorough here.)
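What { cache: false } amounts to can be sketched in a few lines. bustCache is a made-up name for illustration; jQuery’s actual implementation differs in detail, but it likewise appends a timestamped “_” parameter:

```javascript
// Append a cache-busting "_" parameter, in the spirit of jQuery's
// { cache: false } option: a fresh timestamp makes every request URL unique,
// so the browser cache never finds a match.
function bustCache(url, now) {
    now = now || Date.now; // allow injecting a clock for testing
    var sep = url.indexOf("?") > -1 ? "&" : "?";
    return url + sep + "_=" + now();
}

bustCache("/Home/GetData", function () { return 1376170287015; });
// -> "/Home/GetData?_=1376170287015"
```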

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you’re using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. The approach will depend upon the JavaScript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with an invalidation-aware routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example, since you’re hacking at a third-party library; a better approach might be to wrap the invocation of $.getJSON() in an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that the invalidation of cached data derived from the server needs to be triggered by the server, because it is the server, not the client, that knows when client-cached data is no longer up-to-date. Depending on the application, the client logic might just know by keeping track of which server endpoints it touches, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale only under specific conditions that the server alone may know, perhaps only after some calculation. In other words, invalidation needs to be pushed by the server.

One brute-force, burdensome, and perhaps slightly crazy approach might be to use actual “push technology”, formerly “Comet” or “long-polling”, now WebSockets, implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server and the server can push invalidation flags to the client over this open socket.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.
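If push-based invalidation were ever warranted, the client-side handling could reuse the same sort of flag store as the pull-based approach. A hypothetical sketch; applyPushedInvalidation and the injected store are made up for illustration and are not SignalR API:

```javascript
// Hypothetical handler for a server-pushed invalidation message (e.g. arriving
// in a SignalR hub callback). It records the flagged URL, keyed
// case-insensitively, with the time the invalidation arrived, in whatever
// key/value store the page keeps its invalidation flags in (sessionStorage,
// a session cookie, or a plain object as here).
function applyPushedInvalidation(store, url, now) {
    now = now || Date.now; // allow injecting a clock for testing
    store[url.toLowerCase()] = now();
    return store;
}

var flags = {};
applyPushedInvalidation(flags, "/Home/GetData", function () { return 42; });
// flags["/home/getdata"] === 42
```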

While we may not know in advance which server endpoint might introduce the invalidation of client cache data, we can assume that invalidation will be triggered by some server endpoint, and build invalidation-trigger logic around the HTTP responses the server sends back.

To begin implementing some sort of invalidation trigger on the server, I could flag invalidations to the client using an HTTP header.

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    Response.RemoveOutputCacheItem(url); // invalidate on server
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says, “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one.” The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response header that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

First of all, I need to refactor the existing AJAX functionality on the client so that instead of using $.getJSON, I use $.ajax or some other flexible XHR handler, and wrap it all in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function (url, data, callback) {
    return httpAction(url, data, callback, "GET");
};

var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};

var httpAction = function (url, data, callback, method) {
    url = cachebust(url);
    if (typeof (data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method,
        success: function (responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};

var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};

function handleInvalidationFlags(xhr) {
    // not yet implemented
}

function cachebust(url) {
    // not yet implemented
    return url;
}

// application logic
httpGET($APPROOT + "Home/GetData", function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});
$('#reload').on('click', function () {
    window.location.reload();
});
$('#invalidate').on('click', function () {
    httpPOST($APPROOT + "Home/Invalidate", function (o) {
        window.location.reload();
    });
});

At this point we’re not doing anything yet, we’ve just broken up the HTTP/XHR functionality into wrapper functions that we can now modify to manipulate the request and to deal with the invalidation flag in the response. Now all our work will be in handleInvalidationFlags() for capturing that new header we just emitted from the server, and cachebust() for hijacking the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there, and add the cached item to a cached data set that can be stored locally in the browser with web storage. The best place to put this cached data set is in sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration flag) works but is less ideal because it adds to the payload of all HTTP requests. Putting it in localStorage is less ideal because we do want the invalidation flag(s) to go away when the browser session ends, because that’s when the original browser cache will expire anyway. There is one caveat to sessionStorage: if a user opens a new tab or window, the browser will drop the sessionStorage in that new tab or window, but may reuse the browser cache. The only workaround I know of at the moment is to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be. We will need to ignore case when matching URLs with cache invalidation flags.
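The case handling can be shown in isolation. A sketch of the rule; normalizeUrlCase is a made-up name, and the prepurl() function in the full implementation applies the same idea:

```javascript
// Lower-case only the path portion of a URL, leaving the querystring intact,
// so "/Route/Action?Q=V" and "/route/action?Q=V" resolve to the same flag key
// while the (potentially case-sensitive) query values are preserved.
function normalizeUrlCase(url) {
    var q = url.indexOf("?");
    if (q === -1) return url.toLowerCase();
    return url.substring(0, q).toLowerCase() + url.substring(q);
}

normalizeUrlCase("/Home/GetData?Filter=Abc"); // -> "/home/getdata?Filter=Abc"
```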

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {
    // capture HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // update invalidation flags data set
    for (var i = 0; i < invalidatedItemsHeader.length; i++) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }
    // store revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET, which ignores case on the path, we need a
// function to force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // if the item matches, return the URL with the cache-busting value appended
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }
    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is retained as the cache-busting value. This allows the data to be cached again, just updated as of that point in time. If invalidation occurs again, that value is revised to the new date/time.

Running this now, after invalidation is triggered by the server, the subsequent request of data is appended with a cache-buster querystring field.


In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might be by following these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited list to flag multiple items. (Do not URI-encode the semicolon when using it as a list delimiter.) The semicolon is a reserved character in URIs and a valid delimiter in HTTP headers, so this works.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:

     {
         "http://url.com/route/action": (date_value_of_invalidation_flag),
         "http://url.com/route/action/2": (date_value_of_invalidation_flag)
     }

  4. Hijack all XHR requests so that the URL is appropriately appended with cachebusting querystring parameter if the URL was found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default convention ignore case (“/Route/Action” == “/route/action”) on the path, but the HTTP specification does not and therefore the browser cache bucket will not ignore case. Force all URL checks for invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring, otherwise for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters are in consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client.
  7. These steps are for “pull”-based XHR-driven invalidation flags being pulled from the server via XHR. For “push”-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication using WebSockets or long polling. Server application logic can then invoke this channel or hub to send an invalidation flag to the client or to all clients.
  8. On the client side, an invalidation flag “push” triggered in #7 above, for which #1 and #2 above would no longer apply, can still utilize #3 through #6.
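The parameter-ordering rule in step 6 can be enforced mechanically rather than by discipline. A sketch; canonicalizeQuery is a made-up helper, not part of the sample project:

```javascript
// Sort querystring parameters alphabetically so that logically identical
// requests always produce the byte-identical URL that the browser cache
// (and the invalidation flag data set) key on.
function canonicalizeQuery(url) {
    var q = url.indexOf("?");
    if (q === -1) return url;
    var pairs = url.substring(q + 1).split("&").sort();
    return url.substring(0, q) + "?" + pairs.join("&");
}

canonicalizeQuery("/Home/GetData?b=2&a=1"); // -> "/Home/GetData?a=1&b=2"
```

Calling this inside the XHR wrapper, before the invalidation-flag lookup, keeps both the browser cache and the flag data set keyed consistently.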

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip


Tags:

ASP.NET | C# | Javascript | Techniques | Web Development

ASP.NET MVC 4: Where Have All The Global.asax Routes Gone?

by Jon Davis 23. June 2012 03:03

I ran into this a few days back and had been meaning to blog about it, so here it finally is while it’s still interesting information.

In ASP.NET MVC 1.0, 2.0, and 3.0, routes are defined in the Global.asax.cs file in a method called RegisterRoutes(..).

[Screenshot: mvc3_register_routes]

It had become an almost unconscious navigate-and-click routine for me to open Global.asax.cs to diagnose routing errors and to introduce new routes. So upon starting a new ASP.NET MVC 4 application with Visual Studio 11 RC (or Visual Studio 2012 RC, whichever it will be called), it took me by surprise to find that the RegisterRoutes method is no longer defined there. In fact, the MvcApplication class defined in Global.asax.cs contains only 8 lines of code! I panicked when I saw this. Where do I edit my routes?!

[Screenshot: mvc4_globalasax]

What kept me befuddled for far too long (quite a bit longer than a couple seconds, shame on me!) was the fact that these lines of code, when not actually read and only glanced at, look similar to the Application_Start() from the previous iteration of ASP.NET MVC:

[Screenshot: mvc3_globalasax]

Eventually I squinted and paid closer attention to the difference, and then I realized that the RegisterRoutes(..) method is being invoked still but it is managed in a separate configuration class. Is this class an application settings class? Is it a POCO class? A wrapper class for a web.config setting? Before I knew it I was already right-clicking on RegisterRoutes and choosing Go To Definition ..

[Screenshot: mvc4_globalasax_gotodef]

Under Tools –> Options –> Projects and Solutions –> General I have Track Active Item in Solution Explorer enabled, so upon right-clicking an object member reference in code and choosing “Go To Definition” I always glance over at Solution Explorer to see where it navigates to in the tree. This is where I immediately found the new config files:

[Screenshot: mvc4_app_start_solex]

.. in a new App_Start folder, which contains FilterConfig.cs, RouteConfig.cs, and BundleConfig.cs, as named by the invoking code in Global.asax.cs. And to answer my own question, these are POCO classes, each with a static method (i.e. RegisterRoutes).

I like this change. It’s a minor refactoring that cleans up code. I don’t understand the naming convention of App_Start, though. It seems like it should be called “Config” or something, or else Global.asax.cs should be moved into App_Start as well since Application_Start() lives in Global.asax.cs. But whatever. Maintaining configuration details in one big Global.asax.cs file gets to be a bit of a pain sometimes especially in growing projects so I’m very glad that such configuration details are now tucked away in their own dedicated spaces.

I am curious but have not yet checked to determine whether App_Start as a new ASP.NET folder has any inherent behaviors associated with it, such as for example post-edit auto-compilation. I’m doubtful.

In future blog post(s), perhaps my next post, I’ll go over some of the other changes in ASP.NET MVC 4.


Tags:

ASP.NET | Web Development

Changes Are Coming

by Jon Davis 6. July 2011 23:30

Well, it's been a wonderful ride, nearly half a decade working with BlogEngine.net's great blogging software. But it's time to move on.

Orchard, it was very nice to meet you. You have a wonderful future ahead of you, and I was honored to have known you, even just a little. Unfortunately, you and I are each looking for something different. 

WordPress, you are like a beautiful, sexy whore, tantalizing on the outside and known by everybody and his brother, but quite honestly I'm not sure I want to see you naked more than I already have.

I'm frickin' Jon Davis, I've been doing software and web development for 14, nearly 15, years now, and doggonit I should assume myself to be "all that" by now. Actually, blog engines should be like "Hello World" to me by now. I suppose the only reason I've been too shy to do it thus far is that the first time I started building a complete blogging solution, eight or nine years ago (I stopped working on it six or seven years ago), the thing I built proved to be an oddball hunk of an over-programmed desktop application that I had primarily leveraged to grow broad technical talents. It was a learning opportunity, not a proper blogging solution, and it smelled of adolescence. (To this day it won't even compile, because .NET 2.0 broke it.)

In the mean time, I've moved on. I've been an employer-focused career guy for the last five or six years, having little time for major projects like that, but still growing both in technical skill set and in broad understanding of Internet markets and culture.

But I kind of miss blogging. I used to be a prolific blogger. I sometimes browse my blog posts from years ago and find some interesting tidbits of knowledge, in fact sometimes I actually learn from my prior writings because I later forget the things I had learned and blogged about but come back to re-learn them. Sometimes, meanwhile, I'll find some blog posts that are a little bizarre--thoughtful in prose, yet ridiculous in their findings. That's okay. My goal is to get myself to think again, and not be continuously caught up in a daily grind whereby neither my career nor technically-minded side life have any meaning.

Last weekend over two or three days I created a new blog engine. (Anyone who knows me well knows that I've been tinkering with social platform development on my own time for some years, but this one was from-scratch.) I successfully ported all of my blog posts and blog comments from my BlogEngine.net to my new engine and got it to render in blog form using my own NUnit-tested ASP.NET MVC implementation. I would have replaced BlogEngine.net here on my site with my blog engine already, were it not for the fact that as I used Entity Framework Code First I ran into snags getting the generated database schema to correctly align with long-term strategies. And as much as I'd be delighted to prove out my ability to rush a new blog engine out the door, I don't necessarily want to rush a database schema, especially if I intend to someday share the schema and codebase with the world.

And I never said I was going to open-source this. I might, but I also want to commercialize it as a hosted service. I'll likely do both.

But it's coming, and here are my dreamy if possibly ridiculous plans for it:

 

  1. Blogging with comments and image attachments. Nothing special here. But I want to support using the old-skool MetaWeblog API, so that'll definitely be there, as well as the somewhat newer AtomPub protocol.
  2. Syndication with RSS and Atom. Again, nothing special here.
  3. As a blogging framework it will be a WebMatrix-ready web site (not web application). Even though it will use ASP.NET MVC it will be WebMatrix gallery-loadable and Notepad-customizable. The controllers/models will just be precompiled. Note that this is already working and proven out; the depth and detail of customizability (such as a good file management pattern for having multiple themes preinstalled) have not been sorted out yet, though.
  4. AppHarbor-deployable. AppHarbor is awesome! Everything I'm doing here is going to ultimately target AppHarbor. Right now the blog you're looking at is temporarily hosted on a private server, but I want that to end soon as this server is flaky.
  5. Down-scalable. I am prototyping this with SQL Server Compact Edition 4.0, with no stored procedures. Once the project begins to mature, I'll start supporting optimizations for upwards-scalable platforms like SQL Server with optimized stored procedures, etc., but for now flexibility for the little guy who's coming from WordPress to my little blog engine is the focus.
  6. Phase 1 goal: BlogEngine.net v1.4.5.0 approximate feature equivalence (minus prefab templates and extra features I don't use). BlogEngine.net is currently at v2.x now, and I haven't really even looked much at v2.x yet, but as of this blog post I'm currently still using v1.4.5.0 and rather than upgrade I just want to swap it out with something of my own that does roughly the same as what BlogEngine.net does. This includes commenting, categories, widgets, a solid blog editor, and strong themeability; I won't be creating a lot of prefab themes, but if I'm going to produce something of my own I want to expose at least the compiled parts of it to others to reuse, and I'm extremely picky about cleanliness of templates such that they can be easily updated and CSS swappages with minimal server-side code changes can go very far.
  7. Phase 2 goal: Tumblr approximate feature equivalence (minus prefab templates). All I mean by this is that blog posts won't just be blog posts, they'll be content item posts of various forms--blog posts, microblog posts, photo posts, video posts, etc. Still browsable sorted descending by date, but the content type is swappable. In my current implementation, a blog entry is just a custom content type, and blogs are declared in an isolated class library from the core content engine. I also want to support importing feeds from other sources, such as RSS feeds from Flickr or YouTube. Tumblr approximate equivalence also means being mobile-ready. Tumblr is a very smartphone-friendly service, and this is going to be a huge area of focus.
  8. Phase 3 goal: WordPress approximate equivalence (minus prefab templates). Yeah I know, to suggest WordPress equivalence after already baking in something of a BlogEngine.net and Tumblr functionality equivalence, this is sorta-kinda a step backwards on the content engine side. But it's a huge step forward in these areas:
    • Elegance in administration / management .. the blogger has to live there, after all
    • Configurability - WordPress has a lot of custom options, making it really a blog on steroids
    • Modularity - rich support for plug-ins or "modules" so that whether or not many people use this thing, whoever does use it can take advantage of its extensibility
    • Richer themeability - WordPress themes are far from CSS drops, they are practically engine replacements, but that is as much its beauty as it is its shortcoming. You can make it what you want, really, by swapping out the theme.
Non-goals include creating a full-on CMS. I have no interest in trying to build something that competes directly with Orchard, and frankly I think Orchard's real goals are already met with Umbraco which is a fantastic CMS. But Umbraco is nothing like WordPress, WP is really just a glorified blog engine. If anything, I want to compete a little bit with WordPress. And I do think I can compete with WordPress better than Orchard does; even though Orchard seems to be trying to do just that (compete with WordPress), its implementation goals are more in line with Umbraco and those goals are just not compatible because WordPress is a very focused kind of application with a very specific kind of content management.
 
And don't worry, I don't ever actually think I could ever literally compete with WordPress as if to produce something better. I for one strongly believe that it's completely okay to go and build yet another mousetrap, even if mine is of lesser ideals compared to the status quo. There's nothing wrong with doing that. People use what they want to use, and I don't like the LAMP stack nor PHP all that much, otherwise I'd readily embrace WordPress. Then again, I'd probably still create my own WordPress after embracing WordPress, perhaps just like I am going to create my own BlogEngine.net after embracing BlogEngine.net.

 


Tags:

ASP.NET | Blog | Pet Projects

A Platform's People Must Be As Pliable As Its Tech

by Jon Davis 18. June 2011 12:57

I haven't blogged for a while because this blog is still hosted on a server that hasn't received payment since my payment subscription ended over a month ago, and I don't want to just set up the same old blog software on the new host I've already arranged. EDIT: Meh I've decided to build my own blogging/social engine and I'm doing this during my free time. 

So remember a while back I mentioned I'd move this blog to Orchard?

It's not that I haven't gotten around to setting up Orchard yet. It's that I am having a huge amount of difficulty embracing it. Besides the fact that it has an incredibly steep learning curve--and, even once the knowledge plateaus, still only v1.x-level capability--there is something critically wrong with Orchard that I'm still trying to get past, and that is the size and attitude of its current community.

A good blogging platform--CMS, whatever you want to call it--will be "hackable" to conform to the proprietary needs of its user. On the technical side, Orchard is perfect for this because it is "just an ASP.NET MVC application", and they tout it as such, up until they actually see ASP.NET MVC developers tackling and hacking it. And yes I can take Orchard's source code and hack at it to meet whatever requirements I have. Personally, I think that this is the only real-world scenario for Orchard anyway simply because it's still v1.x and is still painfully lacking in feature detail.

But its community so far consists of a very small handful of people, several of them paid--whether directly or indirectly--by Microsoft. So there has proven to be a very large wedge between those who actually know the platform inside and out and those who, like me, are trying to learn the platform (because Orchard is still so new, very few know the platform well from a development standpoint yet are just end users), and the wedge between these two groups is not just one of knowledge but also of culture and attitude. Those who are maintaining Orchard are insistent that people stay in the box that Orchard constructs. This is so completely not the attitude of a typical open source software developer. Most OSS developers see breaking out of a prebuilt box--especially when the box is still a 1.x "greenfield" project--as a huge and wonderful challenge worth conquering. "Going rogue" with a platform is something that is often embraced by its community because it highlights the pliability of the platform or else reveals opportunities for improvement.

And I would play my part in seeing this "box breakout" opportunity as a challenge worth conquering, but every time I sit down to continue to learn the platform and understand its limitations and possibly some seam points I might introduce, I have to ask myself, What's the payoff besides getting my site set up the way I want it? Because I'll admit, my biggest motivation to delve into Orchard is to be an expert in Orchard and to be a useful participating member of its community, so this is not just an investment in the technology but in the community.

So then I ask myself, is the community worth investing in? It is if I'm willing to be the only rogue participant trying to figure out how to break out of the originally intended design and just use the parts of it that I want to use but still ultimately have my own ASP.NET MVC site with an Orchard back-end in certain places. But I'm not willing to be the only rogue. So that leaves me looking like a jerk calling these guys jerks, because they freak out at the notion of using Orchard in a way for which it was not originally intended. (They're not jerks, by the way, and neither am I; I'm just saying it looks something like me being a jerk calling them jerks, because tensions rise and a lot of bickering occurs when they can't get past overall intentions and strategy while I'm trying to address a specific technical scenario.)

So perhaps Orchard is not what I want to use, I'm still unsure, and believe me I have pondered just writing my own blog engine (again), but let me be clear: it is not because the Orchard technology isn't a good fit for me, rather it is because the people aren't a good fit for me. They are delivering a product that they think should be used as-is, with their own proprietary methods of extending and tweaking (rather than ASP.NET MVC methods of extending and tweaking, which in my opinion are completely appropriate), and they don't want to help people twist it around and see their hard work made "ugly" or for that matter "insufficient". It's only human to be protective of your investment and its public image, but it's just not a realistic attitude to have when you're still a young 1.x platform less than a year old that still needs to build up a strong community and whose documentation is still lacking.

Meanwhile, the more I look at WordPress the more I like it. Its core technology isn't much for an ASP.NET developer to look at, but everything else about WordPress is really growing on me, I'm tempted to call it "wonderful". I could sit here and talk about WordPress's wonderfulness for some time but the reality is that everyone who would read this tech blog already knows all about WordPress and if not then just go create a basic WordPress blog yourself, it's free.

In fact, I was working on setting up another site for a friend this week and I used WordPress to do it, and after weeks of frustration with Orchard and its bland 1.x workflow I must confess that working in WordPress was really just such a nice experience. Granted, the site is actually just a plain-vanilla WordPress.com site (not a full WordPress.org deployment), so it's not that I could do a lot of customizations there that I couldn't do or figure out in Orchard; it's that the beautiful experience of working with WordPress itself, and its rich configurability, made the disappointment of WP's limited customization flexibility all just wash away.

I can't see myself becoming a PHP programmer and consultant touting WordPress--I've had a couple of false starts--but the notion has been on the table for nearly a year now, and after trying (albeit not very hard) to work with Orchard I don't think I can get around it.

The best part about WordPress is that it can run on .NET using Phalanger. And I'm seriously toying with the idea of abandoning Orchard for WordPress-on-Phalanger-on-.NET.

 


Tags:

ASP.NET

Quick Tip: Use ASP.NET Templating To Generate E-mail Bodies Or Other Templated Things

by Jon Davis 10. October 2008 22:38

I've been in at least three jobs now where a significant amount of effort by others on the team was spent performing e-mail blasts to thousands or tens of thousands of [opted-in] users, whether for newsletters or for e-mail advertising campaigns. And I've also had countless encounters of requirements of a web site to send an e-mail to a single user, such as upon registration or user-to-user communications via web-based PM interfaces.

It has been surprising to me to discover how insistent development teams are on using alternative templating techniques to generate the pre-formatted e-mails, everything from:

  • .txt files with variable markers like {{USERNAME}}
  • NVelocity templates
  • Database records with text BLOBs and variable markers like %%USERNAME%%
  • C#-written code with lots and lots of myStringBuilder.Append()'s.

(I do like the database text BLOBs if only for simple plain-text formatted e-mails, as this sort of thing allows non-developer system administrators to customize their templates on the fly.)
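The flat-marker approach described above amounts to little more than a token-replacing pass over a string. Here's a minimal sketch in Javascript (the template text, field names, and `fillTemplate` function are all hypothetical, just to illustrate the technique):

```javascript
// Replace {{MARKER}} tokens in a flat template string with values from a map.
// Unknown markers are left intact so missing data is easy to spot.
function fillTemplate(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return Object.prototype.hasOwnProperty.call(values, key) ? values[key] : match;
  });
}

var body = fillTemplate(
  "Hello {{USERNAME}}, your order {{ORDERID}} has shipped.",
  { USERNAME: "jsmith", ORDERID: "1234" }
);
// body === "Hello jsmith, your order 1234 has shipped."
```

Simple enough, and that simplicity is exactly the limitation: a flat marker map gives you no repeaters, no data binding, and no conditional sections, which is the argument below for reaching for a real templating engine instead.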

Yet, never do people take advantage of the templating system they're already using. It's called ASP.NET.

Why would you use ASP.NET? Actually, in a web context, "why wouldn't you?" might be the fairer question, because the reasons for using it are obvious:

  • ASP.NET is one of the most powerful and versatile templating engines in use today. Why anyone would use NVelocity in a world where there's ASP.NET is beyond me.
  • You can take advantage of the full .NET Framework, web services, HttpContext features, and more.
  • You can data-bind to the database and output the same user-customized Rich HTML as you could with normal web pages.
  • You can add repeaters and other multi-dimensional variables that are not possible with flat templates.

Cons:

  • Requires the use of an insanely powerful service that is likely already running to support such things. (That would be IIS.)

Say, for instance, you're invoking it from a web page that has just processed some database transaction and you need to e-mail the user to inform him of a status or reminder or something. You can use Server.Execute() to invoke the ASP.NET template, which writes the rendered results to a writer you provide. Wrap a MemoryStream in a StreamWriter, pass that in as a parameter, then rewind the stream and read it back through a StreamReader into a System.String. Easy! What about context and state? All the more reason to use Server.Execute and not System.Net.WebClient. You retain the current HttpContext, though in any case you can also build up a lengthy query string in the URL to be invoked, and then on the template you can perform whatever context synchronization needs to be performed.

Here's a sample that demonstrates how (but not why).

Default.aspx:

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <%=TemplateOutput %>
    </div>
    </form>
</body>
</html> 

Default.aspx.cs:

protected string TemplateOutput
{
    get
    {
        // Requires: using System.IO;
        MemoryStream ms = new MemoryStream();
        StreamWriter sw = new StreamWriter(ms);

        // Hand data to the template through the current HttpContext
        Context.Items["Question"] = "Is this a working template?";

        // Render Template.aspx into the stream (preserveForm: true)
        Server.Execute("Template.aspx", sw, true);
        sw.Flush();

        // Rewind and read the rendered output back as a string
        ms.Position = 0;
        StreamReader sr = new StreamReader(ms);
        return sr.ReadToEnd();
    }
}

Template.aspx:

<%@ Page Language="C#" %>

<%= Context.Items["Question"].ToString()
    .Replace("Is this", "This is").Replace("?", "!") %> 

Output:

This is a working template!

I hope this was helpful.



Tags:

Web Development | ASP.NET

Keys To Web 3.0 Design and Development When Using ASP.NET

by Jon Davis 9. October 2008 05:45

You can skip the following boring story as it's only a prelude to the meat of this post.

As I've been sitting at my job lately trying to pull off my web development ninja skillz, I feel like my hands are tied behind my back because I'm there temporarily as a consultant to add features, not to refactor. The current task at hand involves adding a couple of additional properties to a key user component in a rich web application. This requires a couple of extra database columns and a bit of HTML interaction to collect the new settings. All in all, about 15 minutes, right? Slap the columns into the database, update the SQL SELECT query, throw on a couple of ASP.NET controls, add some data binding, and you're done, right? Surely not more than an hour, right?

Try three hours, just to add the columns to the database! The HTML is driven by a data "business object" that isn't a business object at all, just a data layer that has method stubs for invoking stored procedures and returns only DataTables. There are four types of "objects" based on the table being modified, and each type has its own stored procedure that ultimately proxies out to the base type's stored procedure, so that means at least five stored procedures for each CRUD operation affected by the addition. Overall, about 10 database objects were touched and as many C# data layer objects as well. Add to that a proprietary XML file that is used to map these data objects' DataTable columns, both in (parameters) and out (fields).

That's just the data. Then on the ASP.NET side, to manage event properties there's a control that's inheriting another control that is contained by another control that is contained by two other controls before it finally shows up on the page. Changes to the properties are a mix of hard-wired bindings to the lowest base control (properties) for some of the user's settings, and for most of the rest of the user's settings on the same page, CLR events (event args) are raised by the controls and are captured by the page that contains it all. There are at least five different events, one for each "section" of properties. To top it off, in my shame, I added both another "SaveXXX" event and another way of passing the data--a series of FindControl(..) invocation chains to get to the buried control and fetch the setting I wanted to add to the database and/or translate back out to the view. (I would have done better than to add more kludge, but I couldn't without being enticed to refactor, which I couldn't do; it's a temporary contract and the boss insisted that I not.)

To top it all off, even the simple CRUD stored procedures alone take far longer than an eye blink, which is seemingly showstopping in code. It takes about five seconds to handle each postback on this page, and I'm running locally (with a networked SQL Server instance).

The guys who architected all this are long gone. This wasn't the first time I've been baffled by the output of an architect who tries too hard to do the architectural deed while forgetting that his job is not only to be declarative on all layers but also to balance it with performance and making the developers' lives less complicated. In order for the team to be agile, the code must be easily adaptable.

Plus the machine I was given is, just like everyone else's, a cheap Dell with 2GB RAM and a 17" LCD monitor. (At my last job, which I quit, I had a 30-inch monitor and 4GB RAM which I replaced without permission and on my own whim with 8GB.) I frequently get OutOfMemoryExceptions from Visual Studio when trying to simply compile the code.

There are a number of reasons I can pinpoint to describe exactly why this web application has been so horrible to work with. Among them,

  • The architecture violates the KISS principle. The extremities of the data layer prove to be confounding, and burying controls inside controls (compositing) and then forking instances of them is a severe abuse of ASP.NET "flexibility".
  • OOP principles were completely ignored. Not a single data layer inherits from another. There is no business object among the "Business" objects' namespace, only data invocation stubs that wrap stored procedure execution with a transactional context, and DataTables for output. No POCO objects to represent any of the data or to reuse inherited code.
  • Tables, not stored procedures, should be used in basic CRUD operations. Stored procedures should be reserved for complex operations where multiple two-way queries must be accomplished to get a job done. They are good for operations, bad for basic data I/O and model management.
  • Way too much emphasis on relying on the Web Forms "featureset" and lifecycle (event raising, viewstate hacking, control compositing, etc.) to accomplish functionality, and way too little understanding and utilization of the basic birds and butterflies (HTML and script).
  • Way too little attention to developer productivity by failure to move the development database to the local switch, have adequate RAM, and provide adequate screen real estate to manage hundreds of database objects and hundreds of thousands of lines of code.
  • The development manager's admission of the sadly ignorant and costly attitude that "managers don't care about cleaning things up and refactoring, they just want to get things done and be done with it"--I say "ignorant and costly" because my billable hours were more than quadrupled versus having clean, editable code to begin with.
  • New features are not testable in isolation -- in fact, they aren't even compilable in isolation. I can compile and do lightweight testing of the data layer without more than a few heartbeats, but it takes two minutes to compile the web site just to see where my syntax or other compiler-detected errors are in my code additions (and I haven't been sleeping well lately so I'm hitting the Rebuild button and monitoring the Errors window an awful lot). 

Even as I study (ever so slowly) for MCPD certification for my own reasons while I'm at home (spare me the biased anti-Microsoft flames on that, I don't care) I'm finding that Microsoft end developers (Morts) and Microsofties (Redmondites) alike are struggling with the bulk of their own technology and are heaping up upon themselves the knowledge of their own infrastructure before fully appreciating the beauty and the simplicity of the pure basics. Fortunately, Microsoft has had enough, and they've been long and hard at the drawing board to reinvent ASP.NET with ASP.NET MVC. But my interests are not entirely, or not necessarily, MVC-related.

All I really want is for this big fat pillow to be taken off of my face, and all these multiple layers of coats and sweatshirts and mittens and ski pants and snow boots to be taken off me, so I can stomp around wearing just enough of what I need to be decent. I need to breathe, I need to move around, and I need to be able to do some ninja kung fu.

These experiences I've had with ASP.NET solutions often make me sit around brainstorming how I'd build the same solutions differently. It's always easy to be everyone's skeptic, and it requires humility to acknowledge that just because you didn't write something or it isn't in your style or flavor doesn't mean it's bad or poorly produced. Sometimes, however, it is. And most solutions built with Web Forms, actually, are.

My frustration isn't just with Web Forms. It's with corporations that build upon Internet Explorer rather than HTML+Javascript. It's with most ASP.NET web applications adopting a look-and-feel that seem to grow in a box that is controlled by Rendmondites, with few artistic deviators rocking the boat. It's with the server-driven view management rather than smart clients in script and markup. It's with nearly all development frameworks that cater towards the ASP.NET crowd being built for IIS (the server) and not for the browser (the client).

I intend to do my part, although intentions are easy, actions can be hard. But I've helped design an elaborate client-side MVC framework before, with great pride, I'm thinking about doing it again and implementing myself (I didn't have the luxury of real-world implementation [i.e. a site] last time, I only helped design it and wrote some of the core code) and open sourcing it for the ASP.NET crowd. I'm also thinking about building a certain kind of ASP.NET solution I've frequently needed to work with (CRM? CMS? Social? something else? *grin* I won't say just yet), that takes advantage of certain principles.

What principles? I need to establish these before I even begin. These have already worked their way into my head and my attitude and are already an influence in every choice I make in web architecture, and I think they're worth sharing.

1. Think dynamic HTML, not dynamically generated HTML. Think of HTML like food; do you want your fajitas sizzling when they arrive, to be enjoyed fresh on your plate with a fork and knife, or do you prefer your food preprocessed and shoved into your mouth like a dripping wet ball of finger-food sludge? As much as I love C#, and acknowledge the values of Java, PHP, Ruby on Rails, et al, the proven king and queen of the web right now, for most of the web's past, and for the indefinite future are the HTML DOM and Javascript. This has never been truer than now with jQuery, MooTools, and other (I'd rather not list them all) significant scripting libraries that have flooded the web development industry with client-side empowerment. Now with Microsoft adopting jQuery as a core asset for ASP.NET's future, there's no longer any excuse. Learn to develop the view for the client, not for the server.

Why? Because despite the fact that client-side debugging tools are less evolved than on the server (no edit-and-continue in VS, for example, and FireBug is itself buggy), the overhead of managing presentation logic in a (server) context that doesn't relate to the user's runtime is just too much to deal with sometimes. Server code often takes time to recompile, whereas scripts don't typically require compilation at all. While in theory there is plenty of control on the server to debug what's needed while you have control of it in your own predictable environment, in practice there are just too many stop-edit-retry cycles going on in server-oriented view management.

And here's why that is. The big reason to move the view to the client is because developers are just writing WAY too much view, business, and data mangling logic in the same scope and context. Client-driven view management nearly forces the developer to isolate view logic from data. In ASP.NET Web Forms, your 3 tiers are database, data+view mangling on the server, and finally whatever poor and unlucky little animal (browser) has to suffer with the resulting HTML. ASP.NET MVC changes that to essentially five tiers: the database, the models, the controller, the server-side view template, and finally whatever poor and unlucky little animal has to suffer with the resulting HTML. (Okay, Microsoft might be changing that with adopting jQuery and promising a client solution, we'll see.)

Most importantly, client-driven views make for a much richer, more interactive UIX (User Interface/eXperience); you can, for example reveal/hide or enable/disable a set of sub-questions depending on if the user checks a checkbox, with instant gratification. The ASP.NET Web Forms model would have it automatically perform a form post to refresh the page with the area enabled/disabled/revealed/hidden depending on the checked state. The difference is profound--a millisecond or two versus an entire second or two.
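The checkbox example above takes only a few lines of plain Javascript; this is a minimal sketch, where the element ids ("hasPets", "petQuestions") and the sub-question scenario are hypothetical:

```javascript
// Pure decision logic: what the sub-question container's display style
// should be for a given checkbox state. No server round trip involved.
function displayStyleFor(checked) {
  return checked ? "block" : "none";
}

// Wire the checkbox to the container (guarded so the logic above can
// also be exercised outside a browser).
if (typeof document !== "undefined") {
  var checkbox = document.getElementById("hasPets");
  var section = document.getElementById("petQuestions");
  checkbox.onchange = function () {
    section.style.display = displayStyleFor(checkbox.checked);
  };
}
```

The reveal/hide happens in a millisecond or two, in place, with no postback and no page redraw.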

2. Abandon ASP.NET Web Forms. RoR implements a good model; try gleaning from that. ASP.NET MVC might be the way of the future. But frankly, most of the insanely popular web solutions on the Internet are PHP-driven these days, and I'm betting that's because PHP follows a similar coding model to classic ASP. No MVC stubs. No code-behinds. All that stuff can be tailored into a site as a matter of discipline (one of the reasons why PHP added OOP), but you're not forced into a one-size-fits-all paradigm; you just write your HTML templates and go.

Why? Web Forms is a bear. Its only two advantages are the ability to drag-and-drop functionality onto a page and watch it go, and premier vendor (Microsoft / Visual Studio / MSDN) support. But it's difficult to optimize, difficult to templatize, difficult to abstract away from business logic layers (if only in that it requires intentional discipline), and puts way too much emphasis on the lifecycle of the page hit and postback. Look around at the ASP.NET Web Forms solutions out there. Web Forms is crusty like Visual Basic is crusty. It was created for, and is mostly used by, corporate grunts who build B2B (business-to-business) or internal apps. The rest of the web sites that use ASP.NET Web Forms suffer greatly from the painful code bloat of the Web Forms coding model and the horrible end-user costs of page bloat and round-trip navigation.

Kudos to Guthrie, et al, who developed Web Forms, it is a neat technology, but it is absolutely NOT a one-size-fits-all platform any more than my winter coat from Minnesota is. So congratulations to Microsoft for picking up the ball and working on ASP.NET MVC.

3. Use callbacks, not postbacks. Sometimes a single little control, like a textbox that behaves like an auto-suggest combobox, just needs a dedicated URL to perform an AJAX query against. But also, in ASP.NET space, I envision the return of multiple <form>'s, with DHTML-based page MVC controllers powering them all, driving them through AJAX/XmlHttpRequest.

Why? Clients can be smart now. They should do the view processing, not the server. The browser standard has finally arrived to such a place that most people have browsers capable of true DOM/DHTML and Javascript with JSON and XmlHttpRequest support.

Clearing and redrawing the screen is as bad as 1980s BBS ANSI screen redraws. It's obsolete. We don't need to write apps that way. Postbacks are cheap; don't be cheap. Be agile; use patterns, practices, and techniques that save development time and energy while avoiding the loss of a fluid user experience. <form action="someplace" /> should *always* have an onsubmit handler that returns false but runs an AJAX-driven post. The page should *optionally* redirect, but more likely only the area of the form or a region of the page (a containing DIV perhaps) should be replaced with the results of the post. Retain your header and sidebar in the user experience, and don't even let the content area go white for a split second. Buffer the HTML and display it when ready.

ASP.NET AJAX has region refreshes already, but still supports only <form runat="server" /> (limit 1), and the code-behind model of ASP.NET AJAX remains the same. Without discipline of changing from postback to callback behavior, it is difficult to isolate page posts from componentized view behavior. Further, <form runat="server" /> should be considered deprecated and obsolete. Theoretically, if you *must* have ViewState information you can drive it all with Javascript and client-side controllers assigned to each form.

ASP.NET MVC can manage callbacks uniformly by defining a REST URL suffix, prefix, or querystring, and then assigning a JSON handler view to that URL, for example ~/employee/profile/jsmith?view=json might return the Javascript object that represents employee Joe Smith's profile. You can then use Javascript to pump HTML generated at the client into view based on the results of the AJAX request.
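A rough sketch of that callback flow, using raw XmlHttpRequest as described above (the profile fields `name` and `title`, and the function names, are hypothetical; the URL shape follows the example):

```javascript
// Build the view fragment on the client from the Javascript object
// returned by the JSON endpoint -- not on the server.
function renderProfile(profile) {
  return "<div class='profile'><h3>" + profile.name + "</h3>" +
         "<p>" + profile.title + "</p></div>";
}

// Fetch e.g. ~/employee/profile/jsmith?view=json and pump the generated
// HTML into the target element, without a postback or page redraw.
function loadProfile(url, targetEl) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      targetEl.innerHTML = renderProfile(JSON.parse(xhr.responseText));
    }
  };
  xhr.open("GET", url, true);
  xhr.send(null);
}
```

Only the profile region changes; the header, sidebar, and the rest of the page never flicker.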

4. By default, allow users to log in without accessing a login page. A slight tangent (or so it would seem), this is a UI design constraint, something that has been a pet peeve of mine ever since I realized that it's totally unnecessary to have a login page. If you don't want to put ugly Username/Password fields on the header or sidebar, use AJAX.

Why? Because if a user visits your site and sees something interesting and clicks on a link, but membership is required, the entire user experience is interrupted by the disruption of a login screen. Instead, fade out to 60%, show a DHTML pop-up login, and fade in and continue forward. The user never leaves the page before seeing the link or functionality being accessed.

Imagine if Microsoft Windows' UAC, OS X's keyring, or GNOME's sudo auth did a total clear-screen and ignored your action whenever it needed an Administrator password. Thankfully it doesn't work that way; the flow is paused with a small dialogue box, not flat-out interrupted.
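A minimal sketch of the in-page login idea; the markup, ids, and function names are all hypothetical, and a real implementation would wire the form's submit to an AJAX post rather than a navigation:

```javascript
// Build the dimming overlay plus an in-place login form as an HTML string.
// The form's action records where to continue after a successful login.
function loginOverlayHtml(returnUrl) {
  return "<div id='loginOverlay' style='opacity:0.6'></div>" +
         "<form id='loginForm' action='" + returnUrl + "'>" +
         "<input name='username'/><input type='password' name='password'/>" +
         "<button type='submit'>Log in</button></form>";
}

// When a membership-required link is clicked, inject the overlay in place
// instead of redirecting to a login page.
function showLogin(returnUrl) {
  var holder = document.createElement("div");
  holder.innerHTML = loginOverlayHtml(returnUrl);
  document.body.appendChild(holder);
}
```

The user stays on the page they were reading; the login simply pauses the flow, exactly like the small OS credential dialogues above.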

5. Abandon the Internet Explorer "standard". This goes out to corporate folks who target IE. I am not saying this as an anti-IE bigot. In fact, I'm saying this in Internet Explorer's favor. Internet Explorer 8 (currently not yet released, still in beta) introduces better web standards support than previous versions of Internet Explorer, and it's not nearly as far behind the trail of Firefox and WebKit (Safari, Chrome) as Internet Explorer 7 is. With this reality, web developers can finally and safely build W3C-compliant web applications without worrying too much about which browser the user is using, and instead ask the user to get the latest version.

Why? Supporting multiple different browsers typically means writing more than one version of a view. This means developer productivity is lost. That means that features get stripped out due to time constraints. That means that your web site is crappier. That means users will be upset because they're not getting as much of what they want. That means fewer users will come. And that means less money. So take on the "Write once, run anywhere" mantra (which was once Java's slogan back in the mid-90s) by writing W3C-compliant code, and leave behind only those users who refuse to update their favorite browsers, and you'll get a lot more done while reaching a broader market, if not now then very soon, perhaps half a year after IE 8 is released. Use Javascript libraries like jQuery to handle most of the browser differences that are left over, while at the same time being empowered to add a lot of UI functionality without postbacks. (Did I mention that postbacks are evil?)

6. When hiring, favor HTML+CSS+Javascript gurus who have talent and an eye for good UIX (User Interface/eXperience) over ASP.NET+database gurus. Yeah! I just said that!

Why? Because the web runs on the web! Surprisingly, most employers don't have any idea and have this all upside down. They favor database gurus as gods and look down upon UIX developers as children. But the fact is I've seen more ASP.NET+SQL guys who halfway know that stuff and know little of HTML+Javascript than I have seen AJAX pros, and honestly pretty much every AJAX pro is bright enough and smart enough to get down and dirty with BLL and SQL when the time comes. Personally, I can see why HTML+CSS+Javascript roles are paid less (sometimes a lot less) than the server-oriented developers--any script kiddie can learn HTML!--but when it comes to professional web development they are ignored WAY too much because of only that. The web's top sites require extremely brilliant front-end expertise, including Facebook, Hotmail, Gmail, Flickr, YouTube, MSNBC--even Amazon.com which most prominently features server-generated content but yet also reveals a significant amount of client-side expertise.

I've blogged it before and I'll mention it again, the one, first, and most recent time I ever had to personally fire a co-worker (due to my boss being out of town and my having authority, and my boss requesting it of me over the phone) was when I was working with an "imported" contractor who had a master's degree and full Microsoft certification, but could not copy two simple hyperlinks with revised URLs within less than 5-10 minutes while I watched. The whole office was in a gossiping frenzy, "What? Couldn't create a hyperlink? Who doesn't know HTML?! How could anyone not know HTML?!", but I realized that the core fundamentals have been taken for granted by us as technologists to such an extent that we've forgotten how important it is to value it in our hiring processes.

7. ADO.NET direct SQL code or ORM. Pick one. Don't just use data layers. Learn OOP fundamentals. The ActiveRecord pattern is nice. Alternatively, if it's a really lightweight web solution, just go back to writing plain-Jane SQL with ADO.NET. If you're using C# 3.0, which of course you are in the context of this blog entry, then use LINQ-to-SQL or LINQ-to-Entities. On the ORM side, however, I'm losing favor with some of them because they often cater to a particular crowd. I'm slow to say "enterprise" because, frankly, too many people assume the word "enterprise" for their solutions when they are anything but. Even web sites running at tens of thousands of hits a day and generating hundreds of thousands of dollars of revenue every month don't necessarily mean "enterprise". The term "enterprise" is more of a people management inference than a stability or quality effort. It's about getting many people on your team using the same patterns and not having loose and abrupt access to thrash the database. For that matter, the corporate slacks-and-tie crowd of ASP.NET "Morts" often can relate to "enterprise", and not even realize it. But for a very small team (10 or less) and especially for a micro ISV (developers numbering 5 or less) with a casual and agile attitude, take the word "enterprise" with a grain of salt. You don't need a gajillion layers of red tape. For that matter, though, smaller teams are usually small because of tighter budgets, and that usually means tighter deadlines, and that means developer productivity must reign right there alongside stability and performance. So find an ORM solution that emphasizes productivity (minimal maintenance and easily adaptable) and don't you dare trade routine refactoring for task-oriented focus, as you'll end up just wasting everyone's time in the long run. Always include refactoring to simplicity in your maintenance schedule.

Why? Why go raw with ADO.NET direct SQL or choose an ORM? Because some people take the data layer WAY too far. Focus on what matters; take the effort to avoid the effort of fussing with the data tier. Data management is less important than most teams seem to think. The developer's focus should be on the UIX (User Interface/eXperience) and the application functionality, not on how the data is stored. There are three areas where the typical emphasis on data management is agreeably important: stability, performance (both of which are why we choose SQL Server over, oh, I dunno, XML files?) and queryability. The last is important both for the application and for decision makers. But a fourth requirement is routinely overlooked: a lightweight developer workflow for working with data, so that you can create features quickly and adapt existing code easily. Again, this is why a proper understanding of OOP (how to apply it, when to use it, etc.) is emphasized all the time by yours truly. Learn the value of abstraction, of inheritance, and of encapsulating interfaces (resulting in polymorphism). Your business objects should not be much more than POCO objects with application-realized properties. Adding a new, simple data-persisted object, or modifying an existing one with, say, a new column, should not take more than a minute of one's time. Spend the rest of that time instead on how best to impress the user with a snappy, responsive user interface.

8. Callback-driven content should derive equally easily from your server, your partner's site, or some strange web service all the way in la-la land. We're aspiring for Web 3.0 now, but what happened to Web 2.0? We're building on top of it! Web 2.0 brought us mashups, single sign-ons, and cross-site social networking. Facebook Applications are a classic example of an excellent student of Web 2.0 graduating and becoming a student of Web 3.0. The problem is, to keep the momentum going, who's driving this rig? If it's not you, you're missing out on the 3.0 vision.

Why? Because now you can. Hopefully by now you've already shifted the bulk of the view logic over to the client, and you've empowered your developers to focus on the front-end UIX. Now, though, the client view is empowered to do more. It still has to derive content from you, but in a callback-driven architecture, the content is URL-defined. As long as the security implications are resolved, you now have the entire web at your visitors' disposal! Now turn it around and make your site benefit from it!

If you're already invoking web services, get that work off your servers! Web services queried from the server cost bandwidth and add significant overhead before the page is released from the buffer to the client. The whole time you're fetching the results of a web service, the client is sitting there looking at a busy animation or a blank screen. Don't let that happen! Throw the client a bone and let it fetch external resources on its own.
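As a rough illustration of that point, here is a minimal JSONP-style sketch of letting the browser fetch an external service directly instead of proxying it through your server. The endpoint, parameter names, and `callback` convention here are all hypothetical; adjust them to whatever the third-party service actually expects.

```javascript
// Build the cross-site request URL, appending the callback parameter that
// JSONP-style services use to wrap their JSON payload in a function call.
function buildJsonpUrl(endpoint, params, callbackName) {
  var pairs = [];
  for (var key in params) {
    if (params.hasOwnProperty(key)) {
      pairs.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    }
  }
  pairs.push("callback=" + callbackName);
  return endpoint + "?" + pairs.join("&");
}

// In the browser, appending the script tag makes the *client* pay the
// bandwidth and latency cost; the service invokes window[callbackName]
// with its data when the script arrives.
function fetchExternal(endpoint, params, onData) {
  var callbackName = "jsonp_" + new Date().getTime();
  window[callbackName] = function (data) {
    delete window[callbackName];
    onData(data);
  };
  var script = document.createElement("script");
  script.src = buildJsonpUrl(endpoint, params, callbackName);
  document.body.appendChild(script);
}
```

The page stays responsive the whole time; the server never blocks a request buffer waiting on someone else's web service.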

9. Pay attention to the UIX design styles of the non-ASP.NET Web 2.0/3.0 communities. There is such a thing as a "Web 2.0 look", whether we like to admit it or not. We web developers evolved and came up with innovations worth standardizing on; why can't designers evolve and come up with visual innovations worth standardizing on? If the end user's happiness is our goal, how are features and stable, performant code more important than aesthetics and ease of use? The problem is, one person's notion of what "the Web 2.0 look" actually looks like is likely very different from another's, or from my own. I'm not speaking of heavy gloss or diagonal lines, and I most certainly am not talking about the "bubble gum" look. (I jokingly mutter "Let's redesign that with diagonal lines and beveled corners!" now and then, but when I said that to my previous boss and a co-worker, both of whom already looked down on me far more than I deserved, neither of them understood that I was joking. Or at least, they didn't laugh or even smile.) No, I am talking about the use of artistic elements, font choices and styles, and layout characteristics that make a web site stand out from the crowd as highly usable and engaging.

Let's demonstrate, shall we? Here are some sites and solutions that deserve some praise. None of them are ASP.NET-oriented.

  • http://www.javascriptmvc.com/ (ugly colors but otherwise nice layout and "flow"; all functionality driven by Javascript; be sure to click on the "tabs")
  • http://www.deskaway.com/ (ignore the ugly logo but otherwise take in the beauty of the design and workflow; elegant font choice)
  • http://www.mosso.com/ (I really admire the visual layout of this JavaServer Pages driven site; fortunately, they support ASP.NET on their product, which I love)
  • http://www.feedburner.com/ (these guys did a redesign not too terribly long ago; I really admire their selective use of background patterns, large-font textboxes, hover effects, and overall aesthetic flow)
  • http://www.phpbb.com/ (stunning layout, rock solid functionality, universal acceptance)
  • http://www.joomla.org/ (a beautiful and powerful open source CMS)
  • http://goplan.org/ (I don't like the color scheme but I do like the sheer simplicity)
  • .. for that matter, I also love the design and simplicity of http://www.curdbee.com/

Now here are some ASP.NET-oriented sites. They are some of the most popular ASP.NET-driven sites and solutions, but their design characteristics, frankly, feel like the late 90s.

  • http://www.dotnetnuke.com/ (one of the most popular CMS/portal options in the open source ASP.NET community .. and, frankly, I hate it)
  • http://www.officelive.com/ (sign in and discover a lot of features with a "smart client" feel, but somehow it looks and feels slow, kludgy, and unrefined; I think it's because Microsoft doesn't get out much)
  • http://communityserver.com/ (it looks like a step in the right direction, but there's an awful lot of smoke and mirrors; follow the Community link and you'll see the best of what the ASP.NET community has to offer in the way of forums .. which frankly doesn't impress me as much as phpBB)
  • http://www.dotnetblogengine.net/ (my blog uses this, and I like it well enough, but it's just one niche, and that's straight-and-simple blogs)
  • http://subsonicproject.com/ (the ORM technology is very nice, but the site design is only "not bad", and the web site starter kit leaves me shrugging with a shiver)

Let's face it, the ASP.NET community is not driven by designers.

Why? Why do I ramble on about such fluffy things? Because at my current job (see the intro text) the site design is a dump of one feature hastily slapped on after another. Although the web app has a lot of features and plenty of AJAX to empower it here and there, it is, for the most part, an ugly and disgusting piece of cow dung in the area of UIX (User Interface/eXperience). The AJAX functionality is based on third-party components that "magically just work" while gobs and gobs of gobbledygook code on the back end attempt to wire everything together, and what AJAX there is, is both rare and slow, encumbered by page bloat and server bloat. The front-end appearance is amateurish, and I'm disheartened as a web developer to work with it.

Such seems to be the makeup of way too many ASP.NET solutions that I've seen.

10. Componentize the client. Use "controls" on the client in the same way you might use .ASCX controls on the server, and in the process, implement a lifecycle and communications subsystem on the client. This is what I want to do, and again I'm thinking of coming up with a framework to pursue it, to complement Microsoft's and others' efforts. If someone else (i.e. Microsoft) beats me to it, fine. I just hope theirs is better than mine.

Why? Because if you're going to emphasize the client, you need a manageable development workflow.
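To make that concrete, here is one minimal sketch of what client-side "controls" might look like: a simple init/render lifecycle plus a tiny publish/subscribe bus for communication between controls. All of the names here (`bus`, `Control`, the topic convention) are illustrative inventions, not any real framework's API.

```javascript
// A tiny publish/subscribe bus: the "communications subsystem" controls
// use to talk to each other without direct references.
var bus = {
  handlers: {},
  subscribe: function (topic, fn) {
    (this.handlers[topic] = this.handlers[topic] || []).push(fn);
  },
  publish: function (topic, data) {
    var fns = this.handlers[topic] || [];
    for (var i = 0; i < fns.length; i++) fns[i](data);
  }
};

// A "control" with a minimal lifecycle: init() wires it to the bus,
// render() regenerates its markup from its current state.
function Control(name, renderFn) {
  this.name = name;
  this.renderFn = renderFn;
  this.state = {};
}
Control.prototype.init = function () {
  var self = this;
  // Re-render whenever anyone publishes data addressed to this control.
  bus.subscribe(this.name + ":update", function (data) {
    self.state = data;
    self.render();
  });
};
Control.prototype.render = function () {
  this.html = this.renderFn(this.state); // in the browser, assign to a DOM node
  return this.html;
};
```

A control subscribes to its own update topic on init, so any other control (or an AJAX callback) can publish data to it and trigger a re-render without knowing anything else about it.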

ASP.NET thrives on the workflows of quick-tagging (<asp:XxxXxx runat="server" />) and drag-and-drop, and that's part of what makes it so popular. But that's not all ASP.NET is good for. ASP.NET has two great strengths: IIS and the CLR (namely the C# language). The quality of the integration of C# with IIS is incredible. You get near-native-compiled-quality code with the deployment ease of scripted text files, and the deployment is native to the server (no proxying, a la Apache->Tomcat->Java, or even FastCGI->PHP). So why not use these strengths to seed a Javascript-based view rather than to generate the entirety of the view?
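As a sketch of that "view seed" idea: the server emits only a small JSON payload into the page, and the client builds the markup from it. The `PAGE_SEED` variable name and the seed's shape are hypothetical; the point is that the server's job shrinks to producing data, not HTML.

```javascript
// Suppose the server (an ASPX page or handler) renders just this into the page:
//   <script>var PAGE_SEED = { "title": "Orders", "items": ["Widget", "Gadget"] };</script>
// The client then builds the actual view from the seed:
function renderFromSeed(seed) {
  var html = "<h1>" + seed.title + "</h1><ul>";
  for (var i = 0; i < seed.items.length; i++) {
    html += "<li>" + seed.items[i] + "</li>";
  }
  return html + "</ul>"; // in the browser: someElement.innerHTML = renderFromSeed(PAGE_SEED);
}
```

Subsequent updates can fetch a fresh seed via an AJAX callback and re-run the same render function, so the server never regenerates the whole page.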

On the competitive front, take a look at http://www.wavemaker.com/. Talk about drag-and-drop coding for smart client-side applications, driven by a rich server back-end (Java). This is some serious competition indeed.

11. RESTful URIs, not postbacks or inline Javascript resets of entire pages. Too many developers of AJAX-driven smart client web apps brag about how the user never leaves the page. This is actually not ideal.

Why? Every time the primary section of content changes, in my opinion, it should have a URI, and that URI should be reflected (somehow) in the browser's Address field. Even if it's impossible to make the URL SEO-friendly (because there are no predictable, spiderable hyperlinks), the user should be able to return to the same view later without stepping through logging in and clicking around. That is partly the very definition of the World Wide Web: all around the world, content is reflected by a URL.
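One way to sketch this in a smart client app is to put the view's identity in the URL fragment: parse the hash into a route and watch it for changes. The route shape here (`#/view/id`) is a hypothetical convention, and the polling fallback reflects that `onhashchange` isn't available in every browser.

```javascript
// Turn a fragment like "#/orders/42" into a route the app can render from.
function parseHashRoute(hash) {
  var parts = hash.replace(/^#\/?/, "").split("/");
  return { view: parts[0] || "home", id: parts[1] || null };
}

// In the browser: watch the hash so the Back button and bookmarks both
// restore the view (use onhashchange where available; poll otherwise).
function watchHash(onRoute) {
  var last = null;
  setInterval(function () {
    if (window.location.hash !== last) {
      last = window.location.hash;
      onRoute(parseHashRoute(last));
    }
  }, 100);
}
```

Whenever the primary content changes, the app sets `window.location.hash` instead of silently swapping content, so every view the user lands on has an address.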

12. Glean from the others. Learn CakePHP. Build a simple symfony or Code Igniter site. Watch the Ruby on Rails screencasts and consider diving in. And have you seen Jaxer lately?!

And absolutely, without hesitation, learn jQuery, which Microsoft will be supporting from here on out in Visual Studio and ASP.NET. Discover the plug-ins and try to figure out how you can leverage them in an ASP.NET environment.

Why? Because you've lived in a box for too long. You need to get out and smell the fresh air. Look at the people as they pass you by. You are a free human being. Dare yourself to think outside the box. Innovate. Did you know that most innovations come from gleaning other people's imaginative ideas and implementations and reapplying them in your own world, using your own tools? Why should Ruby on Rails have a coding workflow that's better than ASP.NET's? Why should PHP be a significantly more popular platform on the public web than ASP.NET; what makes it so special besides being completely free of Redmondite ties? Can you interoperate with it? Have you tried? How can the innovations of Jaxer be applied to the IIS 7 and ASP.NET scenario, and what can you do to see something as earth-shattering inside this Mortian realm? How can you leverage jQuery to make your web site do things you wouldn't have dreamed of trying otherwise? Or at least, how can you apply it to make your web application more responsive and interactive than the typical junk you've been pumping out?

You can be a much more productive developer. The whole world is at your fingertips, you only need to pay attention to it and learn how to leverage it to your advantage.

 

And these things, I believe, are what will drive the Web 1.0 Morts in the direction of Web 3.0: building on the hard work of yesteryear's progress and making the most of the most powerful, flexible, stable, and comprehensive server and web development technology currently in existence (ASP.NET and Visual Studio) by breaking out of their molds and entering the new frontier.




Opinion | Web Development | ASP.NET


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.


