Two First Things Before Switching To Windows 8 (And A Rant On Single Identity)

by Jon Davis 29. November 2013 00:20

I am normally an eager if not early embracer, especially when it comes to Windows. Windows 8, however, was an exception. I've tried, numerous times, to embrace Windows 8, but it is such a different operating system, and it had so many problems at launch, that I was far more eager to chuck it and continue calling Windows 7 "the best operating system ever". My biggest frustrations with it were just like everyone else's--no Start Menu, awkward accessibility, and so on--since I don't use and don't want a tablet for computing, just like I don't want to put my fingerprints all over my computer monitor, hello!? But the dealbreaker was that, among other things, I was trying to use Adobe Premiere Pro for video editing and I kept getting bluescreens, whereas in Windows 7 I didn't. And it wasn't random; it was unavoidable.

At this point I've gotten over the UI hell, since I've now spent more time with Windows Server 2012 than with Windows 8, and Server 2012 has the same UI more or less. But now that Windows 8.1 is out and seems to address so many of the UI and stability problems (I've yet to prove out the video driver stability improvements but I'm betting on the passing of a year's time), I got myself a little more comfortable with Windows 8.1 by replacing the host operating system for the VMWare VM I was using with my [now former] client. It was just a host OS, but it was a start. But I've moved on.

In my home office environment, making the switch to Windows 8.1 more permanently has been slowly creeping up my priority list, particularly now that I have the opportunity of a clean slate, in multiple senses of the phrase, not the least of which is a freed-up 500 GB SSD just begging to be added to my "power laptop" (a Dell) to complement its 2nd gen i7, its upgraded 16GB of RAM, and its existing 500 GB SSD. Yes, I know that 1TB SSDs are out there now, but I already have these. So last night and tonight have been spent poking around with the equivalent of a fresh machine, as this nice laptop is still only a year old.

The experience so far of switching up to Windows 8.1 has been smooth. The first things that went on this drive were Windows 8.1, Visual Studio 2013, and Adobe Creative Cloud (the latter of which has taken a back seat for my time but I plan on doing video editing on weekends again down the road). Oh, and Steam, of course. ;) All of these things and the apps that load under them were set up within hours and finished downloading and installing overnight while I was sleeping.

But in the last hour I ran into a concern that motivated me to post this blog entry about transitioning to Windows 8. It has to do with Microsoft accounts. Before I get into that, let me get one thing out of the way: the title mentions "two things", so the first is that if you hated Windows 8.0, try Windows 8.1. The Windows 8.0 quirks are much more swallowable now, which means you won't be too distracted by them to appreciate all the nice new features, not the least of which is the amazing startup time.

Now then. Microsoft Accounts. I want to like them. I want to love them. The truth is, I hate them. As a solution, it is oversold, and it is a reckless approach to a problem that many if not most people didn't have until Microsoft shoved the solution down their throats and made them have the problem that this is supposedly a solution for.

So before I go on ranting about that, here's the one other thing you should know. If you're tempted to follow the recommended procedure for setting up Windows 8.x but you want a local login/password for your computer or device, and not one managed by your Microsoft Account, don't follow the recommended procedure. Look for small text offering any opportunity to skip the association or creation of a Microsoft account for your device. More importantly, even after Windows is installed with a local user account, your account will be overhauled and converted to a Microsoft Account (managed online), and your username/password changed to the Internet account username/password, unless you find that small text at every Microsoft Account login opportunity.

[Screenshots: the Microsoft account sign-in prompts]

If you want to use apps that require you to log into a Microsoft account, such as the Microsoft Store, or the Games or Music apps, and your Windows profile is already a Microsoft Account profile, you might be logged in automatically; otherwise it'll prompt you, and then all apps are associated with that account. You may not want to do that. I didn't want to do that. I don't want my Internet password to be my Windows password, and I certainly don't want my e-mail address to be visibly displayed as the account name to anyone who looks at my locked computer. I like "Jon". Why can't it just be "Jon"? Get off, Microsoft! Well, it's all good, I managed to stick with a local account profile, but as for these apps, there was a detail that I didn't notice until I did a triple-take--yep, it took me three retroactive account conversions to notice the option. When you try to sign into a Microsoft Account-enabled app like Games or Music and it begins the prompting process to convert your local user profile to a Microsoft Account profile, there is small text at the bottom that literally saves the day! It reads, "Sign into each app separately instead (not recommended)". No, of course it's not recommended, because Microsoft wants your dependency upon their Microsoft Account cloud profile strategy to succeed so that they can win the cloud wars. *yawn* Seriously, if you want a local user profile, and you didn't mind how for the last couple of decades Internet-enabled apps made you re-enter the same credentials or maintain a separate set of credentials, then yes, this action is recommended.

I would also say that you should want a local user profile, and you should want to maintain separate credentials for different apps, and let me explain why.

I ran into this problem in Windows because everything I do with gaming is associated with one user profile, and everything I do with new software development is associated with another profile. But I want to log into Windows only once.

I don't want development and work related interests cluttering up my digital profile that exists for games, and I don't want my gaming interests cluttering up my digital profile that exists for development and work. Likewise, I don't want gamer friends poking around at my developer profile, and I don't want my developer friends poking around at my gaming history. Outside of Microsoft accounts, I have the same attitude about other social networks. I actually use each social network for a different kind of crowd. I use Facebook for family, church friends, and good acquaintances I trust, and for occasional distracting entertainment. I use Twitter and Google+ for development and career interests and occasional entertainment and news. And so on. Now I know that Google+ has this great thing called circles, but here's the problem: you only get one sales pitch to the world, one profile, one face. I have a YouTube channel that has nothing to do with my work; I didn't want to put YouTubey stuff on it for co-workers and employers to see, nor did I want work stuff to be seen by YouTube visitors. Fortunately Facebook and Google+ have "pages" identities, and that's a great start to helping with such problems, though I feel weird using "pages" for my alter egos rather than for products or organizations.

I have a problem with Microsoft making the problem worse. Having just one identity for every app and every web site is a bad, bad idea.

Even anonymity can be a good thing. I play my favorite game as "Stimpy" or as "Dingbat"; I don't want people to know me by name, that's just creepy. Who I really am is a non-essential element of my interaction with the application, so long as I am uniquely identified and validated. I don't want to be known on a web site as "that one guy, with that fingerprint, who buys that food, who plays those games, who watches those videos, who expressed those comments". No. It's trending now to use Facebook identities and the like for comments to eliminate anonymity, the idea being to get commenters to stop being so malicious, but it's making other problems worse. I don't want my Facebook friends and family to potentially know about my public comments on obscure articles and blog posts. No, this isn't good; let me isolate my identity to my different interests. What I do over here, or over there, is none of your business!


Opinion | Windows

Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to it to make it truly an I/O library. In the meantime, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are distilled into a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (as optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below

Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as when working with localStorage, it returns a "synchronous promise", which is essentially a function that can optionally be invoked immediately; doing so wraps .success(value) and returns the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The two examples below are equivalent, but only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implementing functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique: xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently with assertions that it will have some CRUD operations available, the perspective is flipped and you focus on what you need to do, the verbs, first, while the target becomes more like a parameter or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves so that, rather than repositories having an unknown set of operations with unknown interface styles and other features, your standard CRUD operations, which are predictable, have a set of valid repository targets that support those operations.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.
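For illustration only, here is a minimal sketch (not part of xio.js) of the kind of runtime duck-typing check you are left with in plain Javascript when you want something interface-like; the helper name, verb list, and in-memory repository are all hypothetical.

// Hypothetical helper: verify at runtime that a repository object "implements"
// the CRUD verbs we care about, since Javascript has no compile-time interfaces.
var crudVerbs = ["get", "post", "put", "delete"];
function assertCrudRepository(repo, name) {
    crudVerbs.forEach(function (verb) {
        if (typeof repo[verb] !== "function") {
            throw new Error((name || "repository") + " is missing required verb: " + verb);
        }
    });
    return repo;
}

// usage: a hypothetical in-memory repository checked at definition time
var memoryRepo = assertCrudRepository({
    get: function (key) { /* .. */ },
    post: function (key, value) { /* .. */ },
    put: function (key, value) { /* .. */ },
    delete: function (key) { /* .. */ }
}, "memoryRepo");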

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!
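For example, reusing the myRepository definition from above, both styles reach the same implementation; which one to use is purely a matter of taste:

// verb-first convention
xio.get.myRepository("mykey").success(function (value) { /* .. */ });

// conventional repository-object convention, using the object returned by define()
myRepository.get("mykey").success(function (value) { /* .. */ });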

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented based on JSON/Javascript literals field sets (send only deltas)
Examples

// initialize
var xio = Xio(); // initialize a module instance named "xio"

localStorage

xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) 
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});

sessionStorage

xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");

cookie

xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, retaining only the xio.set["cookie"](key, value) signature, you can use the automatically returned helper replacer functions:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");

Note that using this approach, while more expressive and potentially more convertible to other CRUD targets, also results in each helper function deleting the previous value to set the value with the new adjustment.


session cookie

xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");

persistent cookie

xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");

web server resource (basics)

var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().
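As a concrete illustration (reusing the hypothetical basic_sample definition above, whose url is "my/url/{0}/{1}"), an array key is applied positionally to the placeholders:

// the key array [4, 12] is formatted into the url template as "my/url/4/12"
xio.get.basic_sample([4, 12]).success(function (result) {
    // ..
});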

Where the methods property is defined as an array of "GET", "POST", etc., each one mapping to a standard XIO verb, an XHR route will be internally created on behalf of the rest of the options defined in the options object that is passed in as a parameter to define(). The return value of define() is an object that lists all of the various operations that were wrapped for XIO (i.e. get(), post(), etc).

The rest of the options are used, for now, as jQuery's $.ajax(..., options) parameter. The async option defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally invoke immediately with parens (()) to return the value that would normally be passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.
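As a sketch of that last point (the repository name and the hand-rolled promise-like object here are hypothetical, not part of the xio.js samples): an explicit get() that returns its own object exposing .success and .complete will be handed back as-is rather than re-wrapped.

xio.define("prewrapped_sample", {
    get: function (key) {
        // hand-rolled promise-like object with .success and .complete
        var callbacks = { success: [], complete: [] };
        var promise = {
            success: function (fn) { callbacks.success.push(fn); return promise; },
            complete: function (fn) { callbacks.complete.push(fn); return promise; }
        };
        setTimeout(function () { // resolve asynchronously
            callbacks.success.forEach(function (fn) { fn("value for " + key); });
            callbacks.complete.forEach(function (fn) { fn(); });
        }, 0);
        return promise;
    }
});
xio.get.prewrapped_sample("mykey").success(function (v) { /* v === "value for mykey" */ });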


web server resource (asynchronous GET)

xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});

web server resource (synchronous GET)

xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction

web server resource POST

xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});

web server resource (DELETE)

xio.delete.myresourceContainer("myresource");

web server resource (PUT)

xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});

web server resource (PATCH)

xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});

custom implementation and redefinition

xio.define("custom1", {
    get: function(key) { return "teh value for " + key; }
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fall back to long polling, and that has no dependency upon jQuery. Instead, what has so far been implemented is a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on nodejs.

Additionally, in a completely isolated parallel path, if no progress is made by the ASP.NET SignalR team to make the SignalR client freed from jQuery, xio.js might become tailored to be a somewhat code compatible client implementation or a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at top, you definitely may use it for any project. I will drop in a license (probably Apache 2 or BSD or Creative Commons Attribution or somesuch) in the near future.

A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date, and short of devising a new approach to route manipulation or reworking any of the other fundamental infrastructural designs for the endpoints (because there were too many), our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>
<div id="results"></div>
<div><button id="reload">Reload</button></div>
@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function() {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought rested on the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case, it is actually server-side, but this is probably because of the amount of data being transferred. I'll explain that after I describe what I did under my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy..” semantics, such as varying the route parameters. That is great except that the route and parameters were currently not changing in our implementation.

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.

 

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    Response.RemoveOutputCacheItem(Url.Action("GetData"));
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

 

 

<div><button id="invalidate">Invalidate</button></div>

 

 

$('#invalidate').on('click', function() {
    $.post($APPROOT + "Home/DoSomething", null, function(o) {
        window.location.reload();
    }, 'json');
});

 

[Screenshot: While Reload has no effect on the Last modified value, the Invalidate button causes the date to increment.]

When testing, this actually worked quite well. But concerns were raised about the payload of memory on the server. Personally I think the memory payload in practically any server-side caching is negligible, certainly if it is small enough that it would be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is that transmission; the point of caching is to make the user experience as smooth and seamless as possible with minimal waiting, so if the user is waiting for a (cached) payload, while it may be much faster than the time taken to recalculate or re-acquire the data, it is still measurably slower than relying on browser cache.

The default Location for OutputCacheAttribute is actually OutputCacheLocation.Any. This indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests, for tiny payloads the behavior seemed to be caching on the server and no caching on the client; for a large payload from GET requests with querystring parameters, the behavior seemed to be caching on the client but with an HTTP query carrying an "If-Modified-Since" header, resulting in a 304 Not Modified from the server (indicating it was also cached on the server, but the server verified that the client's cache remains valid); and for a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation checking from the client (no HTTP request for an If-Modified-Since check). Now, to be quite honest I am only guessing that these were the distinguishing factors of these behavior observations. Honestly, I saw variations of these behaviors happening all over the place as I tinkered with scenarios, and this was the initial pattern I felt I was observing.

At any rate, for our purposes we were currently stuck with relying on "Any" as the location, which in theory would drop server-side caching if the server ran short on RAM (I don't know for sure, although the truth can probably be researched, which I don't have time to get into). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you really can’t. The browser controls the cache bucket and no browsers provide hooks into the cache to invalidate them. But we can get smart about this, and work around the problem, by bypassing the cached data. Cached HTTP results are stored on the basis of varying by the full raw URL on HTTP GET methods, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and are only cached if allowed to be cached in the first place as per the HTTP header directives in the HTTP response. So, to bypass the cache you don’t cache, or you need to know up front how long the cache should remain until it expires—neither of these being acceptable in a dynamic application—or you need to use POST instead of GET, or you need to vary up the URL.

Microsoft originally got around the caching problem in ASP.NET 1.x by forcing the "normal" development cycle into the lifecycle of <form> tags that always used the POST method over HTTP. Responses from POST requests are never cached. But POSTing is not clean, as it does not follow the semantics of the verb when nothing is being sent up and data is only being retrieved.

You can also use ETag in the HTTP headers, which isn’t particularly helpful in a dynamic application as it is no different from a URL + expiration policy.

To summarize, to control cache:

  • Disable caching from the server in the response headers (e.g. Cache-Control: no-cache)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST not GET
  • ETag
  • Vary the URL (case-sensitive)

Given our options, we need to vary up the URL. There are a number of approaches to this, but almost all of them involve appending or modifying the querystring with parameters that are expected to be ignored by the server.

$.getJSON($APPROOT + "Home/GetData?_=" + Date.now(), function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you’re reading this blog article, you’re probably rolling your eyes. “Duh.”) jQuery inherently supports cache-busting, but it does not do it on its own from $.getJSON(), it only does it in $.ajax() when the options parameter includes {cache: false}, unless you invoke $.ajaxSetup({ cache: false }); first to disable all caching. Otherwise, for $.getJSON() you would have to do it manually by appending the URL. (Alright, you can stop rolling your eyes at me now, I’m just trying to be thorough here..)
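For reference, here is roughly what those two jQuery options look like; this is just a sketch reusing the same hypothetical endpoint as above (when cache is false, jQuery appends its own "_=" timestamp parameter to GET requests):

// disable caching for every jQuery AJAX request on the page
$.ajaxSetup({ cache: false });

// or disable it per request
$.ajax($APPROOT + "Home/GetData", {
    dataType: "json",
    cache: false,
    success: function (o) {
        $('#results').text("Last modified: " + o.LastModified);
    }
});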

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you're using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. The approach to doing this will depend upon the Javascript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with an invalidation routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example since you're hacking at a third party library; a better approach might be to wrap the invocation of $.getJSON() with an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that the invalidation of cached data that was derived from the server needs to be triggered by the server, because it is the server, not the client, that knows when client-cached data is no longer up-to-date. Depending on the application, the client logic might just know by keeping track of what server endpoints it is touching, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale given specific conditions that only the server may know, and perhaps only upon some calculation. In other words, invalidation needs to be pushed by the server.

One brute force, burdensome, and perhaps a little crazy approach to this might be to use actual “push technology”, formerly “Comet” or “long-polling”, now WebSockets, implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server and the server then has this open socket that can push invalidation flags to the client.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.

While we may not know which server endpoint might introduce the invalidation of client cache data, we can assume that the invalidation will be triggered by some server endpoint(s), and build invalidation trigger logic into the handling of all server HTTP responses.

To begin implementing some sort of invalidation trigger on the server I could flag invalidations to the client using HTTP header(s).

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    Response.RemoveOutputCacheItem(url); // invalidate on server
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says that “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one”. The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

Before I do anything else, I need to refactor the existing AJAX functionality on the client so that, instead of using $.getJSON, I use $.ajax or some other flexible XHR handler, and wrap it all in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function(url, data, callback) {
    return httpAction(url, data, callback, "GET");
};
var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};
var httpAction = function(url, data, callback, method) {
    url = cachebust(url);
    if (typeof(data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method, // use the verb passed in by httpGET()/httpPOST()
        success: function(responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};
var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};
function handleInvalidationFlags(xhr) {
    // not yet implemented
};
function cachebust(url) {
    // not yet implemented
    return url;
};

// application logic
httpGET($APPROOT + "Home/GetData", function(o) {
    $('#results').text("Last modified: " + o.LastModified);
});
$('#reload').on('click', function() {
    window.location.reload();
});
$('#invalidate').on('click', function() {
    httpPOST($APPROOT + "Home/Invalidate", function (o) {
        window.location.reload();
    });
});

At this point we’re not doing anything yet, we’ve just broken up the HTTP/XHR functionality into wrapper functions that we can now modify to manipulate the request and to deal with the invalidation flag in the response. Now all our work will be in handleInvalidationFlags() for capturing that new header we just emitted from the server, and cachebust() for hijacking the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there, and add the cached item to a cached data set that can be stored locally in the browser with web storage. The best place to put this cached data set is in sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration flag) works but is less ideal because it adds to the payload of all HTTP requests. Putting it in localStorage is less ideal because we do want the invalidation flag(s) to go away when the browser session ends, because that’s when the original browser cache will expire anyway. There is one caveat to sessionStorage: if a user opens a new tab or window, the browser will drop the sessionStorage in that new tab or window, but may reuse the browser cache. The only workaround I know of at the moment is to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.
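For completeness, here is a rough sketch of what swapping sessionStorage for a session cookie might look like; the helper names are hypothetical, and the cookie is written without an expires attribute so that it dies with the browser session.

// hypothetical helpers for keeping the invalidation flags in a session cookie
function readInvalidationFlags() {
    var match = document.cookie.match(/(?:^|;\s*)invalidated-cache-items=([^;]*)/);
    return match ? JSON.parse(decodeURIComponent(match[1])) : {};
}
function writeInvalidationFlags(flags) {
    // no expires attribute == session cookie
    document.cookie = "invalidated-cache-items=" +
        encodeURIComponent(JSON.stringify(flags)) + "; path=/";
}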

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be. We will need to ignore case when matching URLs with cache invalidation flags.

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {
    // capture HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // update invalidation flags data set
    for (var i in invalidatedItemsHeader) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }
    // store revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET, which ignores case on the path, we need a function to force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // if the item matches, return the concatenated URL
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }
    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is retained as the value concatenated onto the URL. This allows the data to remain cached, just updated to that point in time. If invalidation occurs again, that concatenated value is revised to the new date/time.

Running this now, after invalidation is triggered by the server, the subsequent request of data is appended with a cache-buster querystring field.


 

In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might be by following these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited value to list multiple items. (Do not URI-encode the semicolon when using it as a list delimiter.) The semicolon is a reserved character in URIs and a valid delimiter in HTTP headers, so this works.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:
    {
        "http://url.com/route/action": (date_value_of_invalidation_flag),
        "http://url.com/route/action/2": (date_value_of_invalidation_flag)
    }
  4. Hijack all XHR requests so that the URL is appropriately appended with cachebusting querystring parameter if the URL was found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default convention ignore case (“/Route/Action” == “/route/action”) on the path, but the HTTP specification does not and therefore the browser cache bucket will not ignore case. Force all URL checks for invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring, otherwise for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters are in consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client.
  7. These steps are for “pull”-based invalidation flags retrieved from the server via XHR. For “push”-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication using WebSockets or long polling. Server application logic can then invoke this channel or hub to send an invalidation flag to the client or to all clients (see the client-side sketch after this list).
  8. On the client side, an invalidation flag “push” triggered in #7 above, for which #1 and #2 above would no longer apply, can still utilize #3 through #6.
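
Regarding step 7, here is a minimal client-side sketch of what receiving a pushed invalidation flag might look like using the SignalR generated jQuery proxy; the hub name cacheHub and its server-side broadcast method are hypothetical, and the handler simply reuses the same flag data set that handleInvalidationFlags() maintains above.

// Hypothetical SignalR wiring for "push"-based invalidation.
// Assumes the generated proxy script (/signalr/hubs) is referenced and that a
// server-side CacheHub calls Clients.All.invalidateCacheItem(url) when appropriate.
var cacheHub = $.connection.cacheHub;
cacheHub.client.invalidateCacheItem = function (url) {
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    invalidatedItems[prepurl(url)] = Date.now(); // flag the URL, same as the XHR-based handler
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
};
$.connection.hub.start();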

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip


ASP.NET | C# | Javascript | Techniques | Web Development

In Consideration Of Going Solo

by Jon Davis 21. June 2013 07:46

I’m beat. I spent today packing up items I had valued—a guitar, a nice camera, a big collection of Stargate DVDs—to ship off to various buyers around the country. I had no regrets as I went dumpster-diving at Guitar Center trying to fetch a shipping container suited for the size and shape of a guitar. After all, I need the chump change this transaction will produce. It will cover a fraction of my rent.

Ever play the game EVE Online? I used to. But I was never particularly good at it. I tended to enjoy the agent missions, and I was just not quite good enough to be of great help in PvP team versus team gameplay, mainly because I was terribly intimidated by actual human beings. Nevertheless, I came back to it now and then for a month or two, each time I’d look for a corp (EVE’s equivalent of a guild) to join. I’d end up staying solo, though, and ultimately I’d let the account subscription expire again, either before or after I’d get dismissed from the corp for not logging in enough. Each time I play EVE, indeed each time I tell myself “okay I’m going to do better this time at devoting my attention to a corp”, it becomes more pointless, because a) my corp history would be increasingly filled up with a strangely high number of employment records, and b) I’d be the new guy in the big, long-standing corp again. I’d have to learn the names again. I’d have to understand the corp’s ways of going about the game again. And they’d look at my history and detect what I already know—I’m either a flake or I’m a creep. Not that I choose to be either. But this was a lose-lose situation, and a vicious cycle. And so I end up logging back into EVE Online now and then, just me, hauling some simple battleship, knocking out non-player characters as I do mundane agent missions. And then I get bored by it all, and stop logging in .. until next time.

It does sadden me that I fear that my EVE Online profile and history may be a reflection upon my real-world employment history. And it isn’t even that I don’t have the capacity to be an excellent team member, or to produce excellent output, or to exhibit a personality that people can get along with. It’s that, to say as much, is a little jarring. I can be an excellent team member; I can produce excellent output; I can exhibit a nice personality. What happens, unfortunately, is that each day, indeed each hour, I have to choose to do these things, because the moment I let up my guard, my natural tendencies kick in. And they are ugly. And when it happens, I run the risk of being no longer the man of charm and professional skill, but a man of incidents. A loose cannon. And after a lot of internal checking, I am left with a mess of conclusions.

1) I have lacked the ability to demonstrate respect for authority. And it isn’t that I don’t recognize authority or appreciate how finances and business operations flow. It’s that my bosses are always wrong. Just kidding. It’s that as time has gone on my own experience in the industry has begun to match or exceed that of my bosses, and so now I am forced to go along with the imposition of the business structure of the business that hired me on the basis of that structure alone. I can no longer earn stripes by gaining knowledge and experience in the field, I now have to earn stripes by brown-nosing. And this feels wrong to me. But it is the way it is, and that’s just life in the “real world”: if you want to work for a company as an employee (as per your signature on a Form W-4) and be its b---- then get in the kitchen, shut up, and make the man a sandwich. Do it now, grunt, or every second the man has to wait it’s another dollar taken out of your bonus! There is simply no ability to put soul into this. My desire to be motivated to succeed and to do well tends to be based upon the success of the business and upon the quality and world-changing impact of the business’s product. Instead, as the grunt, it becomes based upon the success of my boss, upon the quality and performance of my duties, and upon my compliance to allow the boss to dictate the measurements of “quality”, which if it’s right then it’s an opportunity for me to learn, but if it’s completely wrong it’s the active practice of ritualistically worshipping Satan. And this is one area in which I tend to explode in disgust. Tact and self-control have gotten a lot better over the years, but it’s still an area for improvement.

2) My history precedes me. Life has been a long journey of learning about myself, about other people, about corporations, about all kinds of things, and in this journey I’ve suffered an awful lot of failures. Failures are success stories because you learn from them—well, that’s nice, except that my history precedes me. And at 36 years old—wait, I’m 36, right?—I’ve begun to get pessimistic. If a job didn’t work out over here, and another job didn’t work out over there, who really do I want to work for? I’ve learned that I do not want to work for someone I admire because rejection as the grunt hurts a lot. Does this mean I want to work for someone I dislike? Of course not. Well if I choose not to have an opinion about who I work for, I run the risk of lacking loyalty and, well, soul in my work, but at least I wouldn’t be disloyal either.

So I end up here. I’ve been here before. It has never gone well—on the other hand, I apparently end up back here anyway, and it’s actually a little more peaceful here. The only thing missing is soul. Soul is passion. Soul has made things miserable in the past, where applying it was in hopes of making things wonderful. So if soul has made things go bad, why? Is this an attitude problem, a skill problem, a focusing problem, an approach problem, a setting problem, or a target of interest problem? Discipline? Maybe I'm just imbalanced. Could it be all of these? Yes, I suppose it could. So perhaps I should drum up some new rules to consider on this, based on these things:

Attitude – Be passionate with a proper attitude among others. Does my passion make others feel kicked around or like they’re being told they’re inferior? Too selfish. I should find out if others have similar passions; if so, I should refocus my passion on enjoying their similar passions when I am with them. Also, I should always be appreciating the practices of the team that work, and not get too hung up on practices that don’t work, because the ones that don’t work always have someone’s ego associated with them and they were perhaps passionate about setting them up. They had a passion that I should appreciate even if the output didn’t match mine.

Goals – I need goals to have an objective I can target and pursue while harnessing the power of passion and skill. Goals should derive out of an attitude check, not the desire to make money or be a boss. Neither making more money nor being the boss reduces stress--in fact, it makes it worse. Living a simple life is remarkably mind-cleansing, and I can only imagine what kind of stress a top-tier leader must have to go through. On the other hand, if money is seen as a tool to do the world good, a means to make the world a better place, and likewise being a leader is seen as an opportunity to make the business owners or executives happier, and doing that while accommodating the needs of the grunts is looked on as a welcome opportunity, these are not bad drivers for goals. Neither is passion a bad driver (unless my passion is in basket weaving). So setting up goals pertinent to technology skills growth is certainly ideal, especially in a field where technology is always evolving. As a Christian, I also have some eternal goals; as one who believes in God I desire to make whatever I do pleasing to Him. If I had a family and my interests were in making a wife happy, again, making more money is not a goal in itself but setting up a budget and adhering to it might be. I might like to marry someday; I should start practicing budgeting. I also want to have some residual income flowing in; I should set goals to write a book, or write software that I can sell. Figuring out which goals to prioritize so that goals that become projects can see the light of day is a lot of work but necessary.

Skill – A passionate web developer should always be learning, and should always be practicing by either looking for problems to solve or creating problems in a sandbox at home that can be solved safely there. If there’s not enough time to learn and to practice, perhaps there isn’t enough focus on the passion! I’ve also found that it’s easy to build up a surface-level understanding of development concepts or tools, but can be difficult to master them. Master them.

Focus – I’m guilty of not being able to focus. I tend to have A.D.D., but on the other hand I can get around this tendency by adjusting my environment (choosing or making a clean place) and restructuring my priorities and the sequence of approaching them. Some people have tried The Pomodoro Technique to deal with time management, and have had some great success.

Approach – It is not enough to tackle a passion. I need strategy. And my work needs to integrate cohesively with others’ passionate output. What happens when you get five musicians in a room and so they practice and play a solo—all at once? You end up with chaos. The whole notion of “you need strategy”, however, is too vague to demonstrate in words because every situation is completely unique to the problems and personalities involved. I just need to use my head—not just left-brain logic but also right-brain intuition.

Setting – Having a passion in Objective-C is great if you’re working at Apple or in an Apple-oriented shop. It’s not so great if you’re in a Java or .NET oriented shop. This example is too obvious, though. Being passionate about restaurant point-of-sale systems is borderline dangerous if I’m trying to lose weight and the workplace is Dunkin’ Donuts. Passions + Setting should not conflict with Goals.

Creativity – Creativity is that necessary component that allows me to stop stomping around asking everyone "hey I need ideas so I can build upon my passions, help me?" I'm guilty of doing that and it's pretty lame. Creativity is itself a skill that needs to be developed. Passion and creativity build upon each other. If I'm not creative enough I should get more passionate. If I'm not passionate enough but have some seedlings of creativity sprouting up, I should keep building on that creativity, passionately.

Target of Interest – For years, I’ve had Ruby on Rails books sitting around, and have had Ruby installed, but I never made Ruby on Rails my passion. How likely is it that my whole world would be turned upside down if I dropped ASP.NET and focused that on Ruby on Rails? I think it would be pretty hairy. Rubyists would argue that I should, that it could only be better. But I am confident in the capacities of ASP.NET MVC, and so my target of interest in my passion is well chosen I believe. But what about other passions? Keep exploring. Find something that clicks. Or, fall in love again with what is proving already to work for me (i.e. ASP.NET MVC).

Discipline – I suppose one of my biggest problems has been that I tend to get A.D.D. when I read or when I do pretty much anything. At home I have 19+ personal projects lined up and the list is so overwhelming I end up hunkering down and playing PC games instead. To address this problem, I have had to prioritize my projects, and I printed this out and taped it to my computer monitor at home:   

  NO MORE GAMES
  UNTIL I ACCOMPLISH MORE GOALS!!

I need to build up a greater curiosity and interest in the practices that I work with. Programming used to be fun. That and more should still be fun – it should be fulfilling. But regardless of how I feel, I should be driven by wisdom, and by the desire to be a greater, more proficient, and more respectable person in the field. 

Balance – Passion without balance might make a person rich, but it is more likely to make a person crazy. I don't necessarily mean clocking out at 5pm or 6pm; if I get back on my computer at home, I've found no balance. I need to step away from the computer. Go to the gym. Go swimming. Go for a walk. Meet up with friends. Study the Bible and pray (yes, I'm like that – or want to be). Go to a sports game. Actually I'm not into sports .. maybe I should go anyway, and bring a friend. Go spend a weekend up at the Grand Canyon. Stop playing PC games for free time. Have more responsibilities outside of "personal projects". Take care of people. Become more well-rounded.

At the end of it all I should keep circling back around to all of these, giving special heed to attitude and balance. These are the elements that make me a more whole person while I become a better, more mature professional in the field of software and web development.



Career | Health and Wellness

Welcome To Our Beautiful Home, Excuse The Legacy Mess Everywhere

by Jon Davis 7. December 2012 09:42

Rant time again. Oh the things that get me worked up to finally blog again.

[image: Office 365 - Outlook.com conflict]
Sadly, while this image I found on Google Images represents the messaging I saw, it does not represent the experience I had. My experience wasn’t quite so pretty.

First, I couldn’t access the mail I was getting notified about from Live Messenger; Microsoft’s site said that Office 365 users can’t upgrade to Outlook.com.

This account can't be used to access Outlook.com

You're currently signed in with an Office 365 email account, which can't be used with Outlook.com. Please click here to sign out of your Office 365 account, then use another Microsoft account to sign in to Outlook.com (for example, your hotmail.com, live.com, or msn.com account). 

Well, that would be all fine and fair, except that I had no recollection of ever having anything to do with Office 365, and I was still getting these “new mail” alerts from Live Messenger every time I logged into (not booted, logged into) my work computer. There was no way to fix the problem. If I went to Office 365’s web site and attempted to sign in, I got into a bizarre redirect loop. Clearly I had no actual Office 365 account because I never got involved with Office 365, but somewhere, somehow, a flag got buried in my profile that identified me as an Office 365 user whenever I attempted to get into Hotmail / Outlook.com.

Then yesterday or this morning I got a splashy marketing email from Microsoft saying that my account was ready to upgrade to Outlook.com. This email came just hours after I was told I couldn’t upgrade. I figured there was maybe a 5% chance that the email was accurate in its portrayal of my account being upgradeable--that something had changed in my account overnight and that Microsoft was so proud of itself it decided to make the email look splashy. Of course, I was right: I still got this ridiculous message saying Office 365 users can’t upgrade.

Again, I wasn’t an Office 365 user. I may have poked at it once to see what it was. Since I could find no recourse, I went about deleting my profile so I could re-create it. I read the up-front warnings about account deletion carefully before proceeding, making sure it didn’t say that I wouldn’t be able to recreate the account with the same e-mail address ever again or for some long period of time. I saw nothing like that. So *click*, gone, deleted. Then, as I went to recreate it, it told me the e-mail address was in use. Great. I did some Googling and discovered in an Xbox forum that there is a 90-day waiting period, at least on Xbox, before the email address that was used to create the now-deleted account can be reused to create another one.

Curious as to whether some of the older link-ins to the “create a new [Microsoft/Xbox/whatever] account” flow might skip over the locked-email check (of course, everything I tried still failed), I couldn’t help but notice that these link-ins still call new accounts “MSN Hotmail” accounts, with some really old graphics and formatting. REALLY, Microsoft?

[image]

Seriously, Microsoft, can’t you get your ducks in order?!

1. Don’t ever lock out a group of users from accessing a service without providing a means for those users to remove themselves from that group. I was “an Office 365 account” user, but I didn’t want to be, I had no intention to be, and I don’t remember how I became one--perhaps by play-testing what was out there--but I had already used this account for Windows Live Messenger, Hotmail (hence my notifications), and a Windows 8 login profile (which has now been destroyed due to this Office 365 horsecrap, thanks!). Instead of deleting my account, I should have been able to get into some kind of obvious interface and just drop that incompatible feature.

2. Better yet, with all the consolidations you guys have done for all the IDs as “Microsoft Accounts”, at least in promises throughout the news media, you should not have allowed an incompatibility to exist! Instead there should have been a conversion process that would immediately take place. But no, you had to go and BLAME THE USER, spin around, and walk away! I get better treatment at the Department of Motor Vehicles!

3. Microsoft, if someone is deleting a user profile, tell them up front, “You will not be able to recreate an account with the e-mail address you used to create this account for 90 days” right in there next to the delete button! I had to have it already deleted before I considered sleuthing forums (!!) to find a hint at the 90 days, and I still don’t know for sure if the 90 days on Xbox accounts translates to 90 days for my account.

4. Remember Passport, Microsoft? I mean, it is the original branding of what is now Microsoft Accounts. Do you remember? Well, a lot of customers do, and they're treated to a miserable experience when they go to http://passport.net/ in Chrome. Not to mention, when they go to sign up for a Microsoft Account there, they get the experience of jumping around between three or four brands and never landing on Microsoft Accounts. So, again, did you forget about Passport.net, Microsoft?

5. By the way, signing in with my personal account into answers.microsoft.com, I was greeted with this, and it never went away. Ever.

[image]

Microsoft, your Microsoft Accounts system, in all of its forms, is a product. Your product is so poorly constructed that it’s hard to know where to begin. How is it that I got into a redirect loop when I attempted to access the Office 365 web site to try to find something to turn off or remove myself from? Why is it that, depending on which community web site I’m accessing, when I access the same “log in” dialog and choose to create a new account, I am presented with the disgusting legacy of an “MSN Hotmail” account setup? Microsoft, all of your new users following this navigation path are going to see that crap. Do you want to relinquish the marketing verbiage of MSN Hotmail or not? If not, why then would you allow these legacy interfaces to be so commonly exposed to the general public? It would be something else entirely if I were trying to access some rarely used feature of Microsoft’s web site, but no, this is a navigation path that Microsoft would presumably hope every last human being with an Internet connection would follow.

Don’t get me wrong, I understand that Microsoft has to take a phased approach to this stuff; let’s roll out new marketing changes and new reorganizations in phases, and people will just suffer through the legacy stuff for a while. I call bullcrap!! This is 2012; business practices we’ve settled for need to change. If you want to output something of quality, you don’t launch a hybrid mess of ancient and new and call it “new”, you call it a “hybrid mess of ancient and new”! Who do you think you’re fooling, Microsoft, when you greet people with “log into your Microsoft Account” with elegant branding, but then as soon as they begin setting up their profile they get this 2005-esque MSN Hotmail experience? It was the same user story! Creating a Microsoft Account to access a service/community. How many Microsoft Account stories are there, really? I count five: log in, log out, create a new account, manage your account, delete your account. Yet it seems no one managing Microsoft Accounts considered that having two completely different branding experiences while navigating through any of these puny five stories makes for a poor level of quality. As huge and important as Microsoft Accounts is ... really, Microsoft?!

Side note: I make such rants because I’m hoping Microsoft is listening, not because I think people should walk away from Microsoft services--I absolutely don’t think that. Microsoft needs to clean this stuff up. This has been going on with Passport / MSN / Live / Microsoft Accounts forever. (Speaking of Passport, have you navigated to http://passport.net in Chrome lately? Its layout is so broken it’s tempting to think it’s not alive anymore.) I am also hoping everyone who is not at Microsoft (basically almost everyone who reads my blog) can take a lesson from this about user experiences and what not to settle for.


Canvas & HTML 5 Sample Junk

by Jon Davis 27. September 2012 15:48

Poking around with HTML 5 canvas again, refreshing my knowledge of the basics. Here's where I'm dumping links to my own tinkerings for my own reference. I'll update this with more list items later as I come up with them.

  1. Don't have a seizure. http://jsfiddle.net/8RYtu/22/
    HTML5 canvas arc, line, audio, custom web font rendered in canvas, non-fixed (dynamic) render loop with fps meter, window-scale, being obnoxious
  2. Pass-through pointer events http://jsfiddle.net/MtGT8/1/
    Demonstrates how the canvas element, which would normally intercept mouse events, does not do so here, and instead allows the mouse event to propagate to the elements behind it. Huge potential but does not work in Internet Explorer.
  3. Geolocation sample. http://jsfiddle.net/nmu3x/4/ 
    Nothing to do with canvas here. Get over it.
  4. ECMAScript 5 Javascript property getter/setter. http://jsfiddle.net/9QpnW/8/
    Like C#, Javascript now supports assigning functions to property getters/setters. See how I store a value privately (in a closure) and do bad by returning a modified value. (A stripped-down sketch of the same idea follows just below this list.)
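For reference, here's a stripped-down sketch of the idea behind item 4 (not the fiddle itself; the property name and values here are made up): the real value lives privately in a closure, and the getter "does bad" by returning a modified value.

var counter = (function() {
    var value = 0; // kept private inside the closure
    var obj = {};
    Object.defineProperty(obj, 'count', {
        get: function() {
            // "doing bad": lie to the caller by returning a modified value
            return value + 1000;
        },
        set: function(v) {
            value = v;
        }
    });
    return obj;
})();

counter.count = 5;
alert(counter.count); // alerts 1005, not 5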

Automatically Declaring Namespaces in Javascript (namespaces.js)

by Jon Davis 25. September 2012 18:24

Namespaces in Javascript are a pattern that many untrained or undisciplined developers fail to adopt, but they are an essential strategy for retaining maintainability and avoiding collisions in Javascript source.

Part of the problem with namespaces is that if you have a complex client-side solution with several Javascript objects scattered across several files but they all pertain to the same overall solution, you may end up with very long, nested namespaces like this:

var ad = AcmeCorporation.Foo.Bar.WidgetFactory.createWidget('advertisement');

I personally am not opposed to long namespaces, so long as they can be shortened with aliases when their length gets in the way.

var wf = AcmeCorporation.Foo.Bar.WidgetFactory;
var ad = wf.createWidget('advertisement');

The problem I have run into, however, is that when I have multiple .js files in my project and I am not 100% sure of their load order, I may run into errors. For example:

// acme.WidgetFactory.js
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function(e) {
        return new otherProvider.Widget(e);
    }
};

This may throw an error immediately because even though I’m declaring the WidgetFactory namespace, I am not certain that these namespaces have been defined:

  • AcmeCorporation
  • AcmeCorporation.Foo
  • AcmeCorporation.Foo.Bar

So again if any of those are missing, the code in my acme.WidgetFactory.js file will fail.

So then I clutter it with code that looks like this:

// acme.WidgetFactory.js
if (!window['AcmeCorporation']) window['AcmeCorporation'] = {};
if (!AcmeCorporation.Foo) AcmeCorporation.Foo = {};
if (!AcmeCorporation.Foo.Bar) AcmeCorporation.Foo.Bar = {};
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function(e) {
        return new otherProvider.Widget(e);
    }
};

This is frankly not very clean. It adds a lot of overhead to my productivity just to get started writing code.

So today, to complement my using.js solution (which dynamically loads scripts), I have cobbled together a very simple script that dynamically defines a namespace in a single line of code:

// acme.WidgetFactory.js
namespace('AcmeCorporation.Foo.Bar');
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget : function(e) {
        return new otherProvider.Widget(e);
    }
};
/* or, alternatively ..
namespace('AcmeCorporation.Foo.Bar.WidgetFactory');
AcmeCorporation.Foo.Bar.WidgetFactory.createWidget = function(e) {
    return new otherProvider.Widget(e);
};
*/

As you can see, a function called “namespace” splits the dot-notation and creates the nested objects on the global namespace to allow for the nested namespace to resolve correctly.

Note that this will not overwrite or clobber an existing namespace, it will only ensure that the namespace exists.

a = {};
a.b = {};
a.b.c = 'dog';
namespace('a.b.c');
alert(a.b.c); // alerts with "dog"

Where you will still need to be careful is when you are not sure of load order: the names all the way up the dot-notation tree should then be treated as namespaces alone and never directly assigned object literals, or else assigning those objects manually may clobber nested namespaces and nested objects.

namespace('a.b.c');
a.b.c.d = 'dog';
a.b.c.e = 'bird';
// in another script ..
a.b = {
    c : {
        d : 'cat'
    }
};
// in consuming script / page
alert(a.b.c); // alerts [object]
alert(a.b.c.d); // alerts 'cat'
alert(a.b.c.e); // alerts 'undefined'

Here’s the download if you want it as a script file [EDIT: the linked resource has since been modified and has grown significantly], and here is its [original] content:

function namespace(ns) {
    var g = function() { return this; }();
    ns = ns.split('.');
    for (var i = 0, n = ns.length; i < n; ++i) {
        var x = ns[i];
        if (x in g === false) g[x] = {};
        g = g[x];
    }
}

The above is actually written by commenter "steve" (sjakubowsi -AT- hotmail -dot-com). Here is the original solution that I had come up with:

namespace = function(n) {
    var s = n.split('.');
    var exp = 'var ___v=undefined;try {___v=x} catch(e) {} if (___v===undefined)x={}';
    var e = exp.replace(/x/g, s[0]);
    eval(e);
    for (var i = 1; i < s.length; i++) {
        var ns = '';
        for (var p = 0; p <= i; p++) {
            if (ns.length > 0) ns += '.';
            ns += s[p];
        }
        e = exp.replace(/x/g, ns);
        eval(e);
    }
}



Javascript | Pet Projects

Why jQuery Plugins Use String-Referenced Function Invocations

by Jon Davis 25. September 2012 10:52

Some time ago (years ago) I cobbled together a jQuery plug-in or two that at the time I was pretty proud of, but in retrospect I’m pretty embarrassed by. One of these plugins was jqDialogForms. The embarrassment was not due to its styling—the point of it was that it could be skinnable, I just didn’t have time to create sample skins—nor was the embarrassment due to the functional conflict with jQuery UI’s dialog component, because I had a specific vision in mind which included modeless parent/child ownership, opening by simple DOM reference or by string, and automatic form serialization to JSON. Were I to do all this again I would probably just extend jQuery UI with syntactical sugar and move form serialization to another plugin, but all that is a tangent from the purpose of this blog post. My embarrassment with jqDialogForms is with the patterns and conventions I chose in contradiction to jQuery’s unique patterns.

Since then I have abandoned (or perhaps neglected) jQuery plugins development, but I still formed casual and sometimes uneducated opinions along the way. One of the patterns that had irked me was jQuery UI’s pattern of how its components’ functions are invoked:

$('#mydiv').accordion( 'disable' );
$('#myauto').autocomplete( 'search' , [value] );
$('#prog').progressbar( 'value' , [value] );

Notice that the actual functions being invoked are identified with a string parameter into another function. I didn’t like this, and I still think it’s ugly. This came across to me as “the jQuery UI” way, and I believed that this contradicted “the jQuery way”, so for years I have been baffled as to how jQuery could have adopted jQuery UI as part of its official suite.

Then recently I came across this, and was baffled even more:

http://docs.jquery.com/Plugins/Authoring

Under no circumstance should a single plugin ever claim more than one namespace in the jQuery.fn object.

(function( $ ){

  $.fn.tooltip = function( options ) { 
    // THIS
  };
  $.fn.tooltipShow = function( ) {
    // IS
  };
  $.fn.tooltipHide = function( ) { 
    // BAD
  };
  $.fn.tooltipUpdate = function( content ) { 
    // !!!  
  };

})( jQuery );

This is discouraged because it clutters up the $.fn namespace. To remedy this, you should collect all of your plugin’s methods in an object literal and call them by passing the string name of the method to the plugin.

(function( $ ){

  var methods = {
    init : function( options ) { 
      // THIS 
    },
    show : function( ) {
      // IS
    },
    hide : function( ) { 
      // GOOD
    },
    update : function( content ) { 
      // !!! 
    }
  };

  $.fn.tooltip = function( method ) {
    
    // Method calling logic
    if ( methods[method] ) {
      return methods[ method ].apply( this, Array.prototype.slice.call( arguments, 1 ));
    } else if ( typeof method === 'object' || ! method ) {
      return methods.init.apply( this, arguments );
    } else {
      $.error( 'Method ' +  method + ' does not exist on jQuery.tooltip' );
    }    
  
  };

})( jQuery );

// calls the init method
$('div').tooltip(); 

// calls the init method
$('div').tooltip({
  foo : 'bar'
});

// calls the hide method
$('div').tooltip('hide'); 
// calls the update method
$('div').tooltip('update', 'This is the new tooltip content!'); 

This type of plugin architecture allows you to encapsulate all of your methods in the plugin's parent closure, and call them by first passing the string name of the method, and then passing any additional parameters you might need for that method. This type of method encapsulation and architecture is a standard in the jQuery plugin community and is used by countless plugins, including the plugins and widgets in jQueryUI.

What baffled me was not their initial reasoning pertaining to namespaces. I completely understand the need to keep plugins’ namespaces in their own bucket. What baffled me was how this was considered a solution. Why not simply use this?

$('#mythingamajig').mySpecialNamespace.mySpecialFeature.doSomething( [options] );

To see if I could make both myself and the “official” jQuery team happy, I cobbled this test together ..

(function($) {
    
    $.fn.myNamespace = function() {
        var fn = 'default';
        
        var args = $.makeArray(arguments);
        if (args.length > 0 && typeof(args[0]) == 'string' && !(!($.fn.myNamespace[args[0]]))) {
            fn = args[0];
            args = $(args).slice(1);
        }
        $.fn.myNamespace[fn].apply(this, args);
    };
    $.fn.myNamespace.default = function() {
        var s = '\n';
        var i=0;
        $(arguments).each(function() {            
            s += 'arg' + (++i).toString() + '=' + this + '\n';
        });
        alert('Default' + s);
        
    };
    $.fn.myNamespace.alternate = function() {
        var s = '\n';
        var i=0;
        $(arguments).each(function() {            
            s += 'arg' + (++i).toString() + '=' + this + '\n';
        });
        alert('Alternate' + s);
        
    };

    $().myNamespace('asdf', 'xyz');
    $().myNamespace.default('asdf', 'xyz');
    $().myNamespace('default', 'asdf', 'xyz');
    $().myNamespace.alternate('asdf', 'xyz');
    $().myNamespace('alternate', 'asdf', 'xyz');
    
})(jQuery);

Notice the last few lines in there ..

    $().myNamespace('asdf', 'xyz');
    $().myNamespace.default('asdf', 'xyz');
    $().myNamespace('default', 'asdf', 'xyz');
    $().myNamespace.alternate('asdf', 'xyz');
    $().myNamespace('alternate', 'asdf', 'xyz');

When this worked as I had hoped, I originally set about making this blog post a “plugin generator plugin” that would make plug-in creation really simple and also enable the above calling convention. But when I got past some passing tests and added a few more, I realized I had failed to notice a critical detail: the this context, and chainability.

In JavaScript, navigating a namespace as with $.fn.myNamespace.something.somethingelse doesn’t execute any code within the dot-notation. Without the execution of functional code, there can be no context for the this context, which should be the jQuery-wrapped selection, and as such there can be no context for the returned chainable object. (I realize that it is possible to execute code with modern JavaScript getters and setters, but not all browsers support getters and setters, and certainly not all commonly used browsers do.) This was something that I as a C# developer found easy to forget and overlook, because in C# we take the passing around of context in property getters for granted.
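Just to illustrate that point, here's a minimal sketch (assuming an ES5-capable browser; the namespace and function names are hypothetical) of how a property getter is the one dot-notation hook that could capture the jQuery-wrapped selection without an explicit invocation:

(function($) {
    // Hypothetical sketch: expose the namespace through an ES5 getter so that
    // merely accessing $(...).myNamespace executes code and captures `this`.
    Object.defineProperty($.fn, 'myNamespace', {
        get: function() {
            var $context = this; // the jQuery-wrapped selection
            return {
                doSomething: function(options) {
                    // ... act on $context here ...
                    return $context; // hand jQuery back for chainability
                }
            };
        }
    });
})(jQuery);

// usage: $('#mythingamajig').myNamespace.doSomething({ });

Since getter support wasn't something you could count on across commonly used browsers at the time, it's easy to see why the plugin guidance didn't go this route.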

Surprisingly, this technical reasoning for the string-based function identifier in jQuery plug-in function invocations was not mentioned on the jQuery Plugins documentation site, nor was it mentioned in the Pluralsight video-based training I recently perused. It seemed like what Pluralsight’s trainer was saying was, “You can use $().mynamespace.function1(), but that’s obscure! Use a string parameter instead!” And I’m like, “No, it is not obscure! Calling a function by string is obscure, because you can’t easily identify it as a function reference distinct from a parameter value!”

The only way to retain the this context while removing the string-based function reference is to invoke it along the way.

$().myNamespace().myFunction('myOption1', true, false);

Notice the parentheses after .myNamespace. And that is a wholly different convention that few in jQuery-land are used to. But I do think that it is far more readable than ..

$().myNamespace('myFunction', 'myOption1', true, false);

I still like the former; it is more readable, and I remain unsure as to why the latter is the accepted convention, but my guess is that a confused user might try to chain back to jQuery right after .myNamespace() rather than after executing a nested function. And that, I suppose, demonstrates how the former pattern is contrary to jQuery’s chainability design of every().invocation().just().returns().jQuery.
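For what it's worth, here's a rough sketch (the names are made up, and this is only an illustration, not a recommendation) of what that invoked-namespace pattern could look like while still handing jQuery back to the caller at the end:

(function($) {
    $.fn.myNamespace = function() {
        var $context = this; // capture the jQuery-wrapped selection
        return {
            myFunction: function(option, flagA, flagB) {
                // ... act on $context using the arguments ...
                return $context; // return jQuery so normal chaining can resume
            }
        };
    };
})(jQuery);

// usage: $('#mythingamajig').myNamespace().myFunction('myOption1', true, false).hide();

The catch, as noted above, is that a caller might reasonably expect .myNamespace() itself to return jQuery, which is exactly the kind of confusion the string-based convention avoids.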



Javascript

Personal Status Update [September 19, 2012]

by Jon Davis 19. September 2012 12:16

I know, I know, I promised I’d pick up some steam on my blog and then suddenly I got quiet. Here’s the deal .. when I started picking up a bit more steam a couple months ago, I was looking for my next permanent job. Well, I found it. And I’m kind of floored by its potential, as well as by the knowledge that I’d be surrounded by some amazing people and held to high expectations of technical and professional maturity. This is my dream job.

I’ll post more but for now I need to catch up on some Pluralsight training, perhaps (finally) get some Microsoft certifications (MCPD and perhaps at some point even MCM), get a couple project successes behind me, and ultimately prove my worth to my employer. They’re watching. o.O

Oh, by the way, remember using.js? I created it waaay back in 2008 and it was surprisingly popular. Anyway, today I finally moved it over to GitHub. https://github.com/stimpy77/using.js Recent Javascript design patterns training on Pluralsight as a refresher got me motivated to blow the dust off of it and put it in a more publicly enjoyable space. Cheers.



ASP.NET MVC 4: Where Have All The Global.asax Routes Gone?

by Jon Davis 23. June 2012 03:03

I ran into this a few days back and had been meaning to blog about it, so here it finally is while it’s still interesting information.

In ASP.NET MVC 1.0, 2.0, and 3.0, routes are defined in the Global.asax.cs file in a method called RegisterRoutes(..).

[image: mvc3_register_routes]

It had become an almost unconscious navigate-and-click routine for me to open Global.asax.cs to diagnose routing errors and to introduce new routes. So upon starting a new ASP.NET MVC 4 application with Visual Studio 11 RC (or Visual Studio 2012 RC, whichever it will be called), it took me by surprise to find that the RegisterRoutes method is no longer defined there. In fact, the MvcApplication class defined in Global.asax.cs contains only 8 lines of code! I panicked when I saw this. Where do I edit my routes?!

[image: mvc4_globalasax]

What kept me befuddled for far too long (quite a bit longer than a couple seconds, shame on me!) was the fact that these lines of code, when not actually read and only glanced at, look similar to the Application_Start() from the previous iteration of ASP.NET MVC:

[image: mvc3_globalasax]

Eventually I squinted and paid closer attention to the difference, and then I realized that the RegisterRoutes(..) method is being invoked still but it is managed in a separate configuration class. Is this class an application settings class? Is it a POCO class? A wrapper class for a web.config setting? Before I knew it I was already right-clicking on RegisterRoutes and choosing Go To Definition ..

[image: mvc4_globalasax_gotodef]

Under Tools –> Options –> Projects and Solutions –> General I have Track Active Item in Solution Explorer enabled, so upon right-clicking an object member reference in code and choosing “Go To Definition” I always glance over at Solution Explorer to see where it navigates to in the tree. This is where I immediately found the new config files:

[image: mvc4_app_start_solex]

.. in a new App_Start folder, which contains FilterConfig.cs, RouteConfig.cs, and BundleConfig.cs, as named by the invoking code in Global.asax.cs. And to answer my own question, these are POCO classes, each with a static method (i.e. RegisterRoutes).

I like this change. It’s a minor refactoring that cleans up code. I don’t understand the naming convention of App_Start, though. It seems like it should be called “Config” or something, or else Global.asax.cs should be moved into App_Start as well since Application_Start() lives in Global.asax.cs. But whatever. Maintaining configuration details in one big Global.asax.cs file gets to be a bit of a pain sometimes especially in growing projects so I’m very glad that such configuration details are now tucked away in their own dedicated spaces.

I am curious but have not yet checked to determine whether App_Start as a new ASP.NET folder has any inherent behaviors associated with it, such as for example post-edit auto-compilation. I’m doubtful.

In future blog post(s), perhaps my next post, I’ll go over some of the other changes in ASP.NET MVC 4.



ASP.NET | Web Development


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company that you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
