Prelude to a Blogging Reset

by Jon Davis 19. October 2019 22:55

I started blogging around 2002/2003, when Blogger and Radio UserLand inspired me to create a desktop blogging app I called PowerBlog.

Archive.org: https://web.archive.org/web/20031010203956/http://powerblog.net/
PowerBlog (beta release)

My motivations were threefold:

  1. To grow my skill set as a software developer.
  2. To see if I could maybe make a living building and selling home-grown software. (PowerBlog was available for sale, but didn't sell.)
  3. To write, and write, and write. My brain was churning, and I wanted to divulge my thoughts out loud. 

PowerBlog was written in VB6. It consisted of an instance of the Internet Explorer WebBrowser COM object to drive a true WYSIWYG editing experience, design philosophies borrowed from Outlook Express (an e-mail and newsgroup client), and a publish mechanism with a component model, including support for user-defined publishing scripts via the Active Scripting (VBScript/JScript) runtime component. When .NET v1.1 came out, I rewrote PowerBlog, reusing the same components from Microsoft but adding more features, such as XML-RPC--for which I built my own client/server libraries with self-documenting HTML output similar to ASP.NET's .asmx behavior--and style synchronization, so that the user could edit in the same CSS style as the page itself. I tried to sell PowerBlog for $10 to $20 per download. It didn't sell. That's kind of okay; the value of my efforts was found in my skills ramp-up and in having a tool for my own writing.

PowerBlog got Microsoft's attention. Three things happened, all of a sudden, with Microsoft: 1) the Windows Live initiative kicked off blogging from their web site (which they've since shuttered), 2) Microsoft interviewed me, and 3) Windows Live Writer was written, taking most of the good ideas I had in PowerBlog (admittedly, tidbits of PowerBlog's ideas were themselves taken from another app called W.bloggar). I got nothing for it but some bragging rights for the ideas.

I was broke. My motivators for going out on my own (a taxes-related need to go offline and do some research) subsided. So I stopped development and went back out into the workforce. When .NET 2.0 came out along with a new version of Internet Explorer, it broke PowerBlog permanently: Microsoft published WebBrowser components that collided directly with the WebBrowser components I had created for PowerBlog. So PowerBlog isn't usable today--it will just crash for you if you try--and I couldn't even build my own source code anymore without finding an old Windows XP box with an old version of Visual Studio and .NET Framework 1.1. It just wasn't worth the hassle, so I never bothered. Maintenance halted immediately. PowerBlog.net went offline. I didn't much care. I did need to keep blogging and writing, though, so eventually I found BlogEngine.net and transferred my technical blog content to it. I updated the blog engine code once or twice as it evolved, but eventually I just let it sit dormant.

So that's where my tech blog instance stands today. http://www.jondavis.net/techblog/ runs on a decade-old version of BlogEngine.net, which I've tailored a bit to include banned IPs and some home-grown CAPTCHA functionality for the comments. Over the years I've looked at a few other blogging engines I considered adopting. Most notably, I remember looking at NBlog, which gave me the most control but required me to implement it in code and didn't sufficiently demonstrate black-box modularity, and Ghost, which would have grown me in NodeJS but ticked me off with its stance of supporting only Markdown and never WYSIWYG HTML editing.

Meanwhile, the whole time, developers and non-developers alike have pooh-poohed the interestingness of blogs, blogging software, blogging strategies, blogging services, etc. I can certainly see why. At its most basic level, a blog is little more than a single-table CRUD app. You have an index of recent posts, you have a detail view of an individual blog post, you have an editor and creator page, and you can delete a blog post. The content of a blog post is, to the casual observer, nothing but four components: title, body, author, and publish date. If that's what you think a blog is, you're naive, and likely didn't bother reading this far.
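To spell out just how reductive that casual view is, the entire implied data model fits in a few lines. This is a hypothetical sketch, not anything lifted from PowerBlog or BlogEngine.net:

using System;

// The "naive view" of a blog: one table, four interesting fields.
public class BlogPost
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string Body { get; set; }
    public string Author { get; set; }
    public DateTime PublishDate { get; set; }
}

Everything that makes a blog engine interesting (feeds, pings, slugs, comments, spam control) lives outside those four fields.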

At the very least, on the technical side, blogging brought about a few innovations that revolutionized the Internet, including: 

  • XML-RPC (SOAP's midget predecessor)
  • RSS
  • Publish pings
  • Atom
  • OPML
Not to mention less specific innovations, like end-user-accessible management of slugs (URL paths) for enhancing SEO. And the whole notion of blogging became the basis of the World Wide Web's notion of community interconnectivity. The broad set of Web sites was social networking before social networks took over social networking. Blog sites were known, and linked to each other. Followers of blogs created view counts that filled the hole that Facebook post likes fill today.
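As a taste of how deeply those formats got baked into the tooling, .NET has shipped feed support in the box since 3.5. Here's a minimal hedged sketch (the URLs and slug are illustrative only) that serializes the same feed as either RSS 2.0 or Atom 1.0:

using System;
using System.ServiceModel.Syndication;
using System.Xml;

class FeedDemo
{
    static void Main()
    {
        // One SyndicationFeed object can serialize as either RSS 2.0 or Atom 1.0.
        var feed = new SyndicationFeed(
            "Jon's Tech Blog", "Posts about software development",
            new Uri("http://www.jondavis.net/techblog/"))
        {
            Items = new[]
            {
                new SyndicationItem("Prelude to a Blogging Reset", "Post body here ..",
                    new Uri("http://www.jondavis.net/techblog/post/blogging-reset")) // made-up slug
            }
        };
        using (var writer = XmlWriter.Create(Console.Out))
        {
            new Rss20FeedFormatter(feed).WriteTo(writer); // or new Atom10FeedFormatter(feed)
        }
    }
}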
 
This is perhaps why dev.to is such a success story; it's like a Facebook group, for coders, where every single post is a blog article.
 
Anyway, so now I sit here typing into the ten-year-old deployment of BlogEngine.net on my resurrected tech blog, wondering what my next steps should be. Well, I'm pretty sure I want to create my own blog engine again, and build on it. Yes, it's a two-decades-stale idea, but here are my motives:
 
  • My blog philosophies still differ somewhat from the offerings currently available (although not far at all from BlogEngine.net's)
  • I want to keep growing my technical skill set (not just write about them). There are revised web and software standards I need to freshen up on, and apply my learnings. Creating a blog site has always been a good mechanism to sample and apply the most basic web tech skills.
  • Anything I deploy to a technical blog needs to serve as part of my personal portfolio, including the site on which my writings are displayed
  • I still like the idea of taking something I myself created and putting it out on github as a complete package and perhaps managing a hosted instance of it, similar to WordPress.
All this said, I don't think I'll be doing a full-blown BlogEngine.net equivalent. It will be a simple thing, and from it I'll glean some ideas and apply them to other software efforts I'll be working on.
 
But a few things I will point out about the technical strategies and approaches I intend to implement:
  1. Rich front-end libs (React, Vue, Blazor, etc.)?? While I might take, say, Ghost's approach of having some rich interactivity for the author's experience, the end user (reader) cannot be forced to utilize them. A blog page must be rich in HTML semantics and index well SEO-wise for the web at large, so the HTTP response payload cannot be Javascript stub placeholders for dynamic content.
  2. Componentization is more important than anything else. I do intend to iterate over a lot of the features this old instance of BlogEngine.net has, as well as those of current blog engines, and figure out ways to spoonfeed those features without embedding them deeply in interdependencies. The worst thing I could do is embed all the bells and whistles directly into ASP.NET Razor Pages themselves. I intend to apply decorator pattern principles (see the sketch after this list), perhaps applying sidecar architecture for some components where appropriate. Right now I'm looking at using Identity Server 4, for example, as a sidecar architecture for author identity and privileges. Speaking of which,
  3. Everything must be readily deployable anywhere, black-box-capable: as ad hoc code launched with self-hosting, as an Azure web app service (PaaS), Docker-contained, or on a VM (IaaS). I also intend to set up a hosted instance and offer limited blog hosting for free, with extended features for a fee. In this sense, WordPress is a competing platform.
  4. "Extended features, like ..?"  Premium templates. Large storage hosting. And premium add-on features yet to be defined.
  5. The templates can be based on Razor, but "pages" cannot be tightly defined. If this platform grows up, it needs to be more than a blog; it needs to be a mini-CMS (content management system).
  6. The social networking ecosystem is a mandatory consideration; full blogging must integrate with microblogging and multi-contributor cross-pollination, including across disparate sites and systems; more on this later, but imagine trusted Twitter users liking Facebook posts.
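To illustrate the decorator idea from point 2, here's a minimal sketch. All the names here are hypothetical; nothing below is from BlogEngine.net or any existing engine:

public interface IPostRenderer
{
    string Render(string postBody);
}

// The core renderer stays dumb.
public class BasicPostRenderer : IPostRenderer
{
    public string Render(string postBody) => $"<article>{postBody}</article>";
}

// Optional features wrap the core instead of being baked into Razor pages.
public class CommentsDecorator : IPostRenderer
{
    private readonly IPostRenderer _inner;
    public CommentsDecorator(IPostRenderer inner) => _inner = inner;

    public string Render(string postBody) =>
        _inner.Render(postBody) + "<section class=\"comments\"><!-- .. --></section>";
}

// Composition root decides which bells and whistles get stacked on:
// IPostRenderer renderer = new CommentsDecorator(new BasicPostRenderer());

The point is that removing a feature means removing a wrapper, not untangling a Razor page.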
Again, yes, I am unpacking a two-decades-old solution proposal to a Web 1.0 / 2.0 problem, but that problem never really went away; it just became less prominent. I really don't anticipate this blowing up to being much of anything. At a minimum, I need to convert all of my content here on this technical blog (at http://www.jondavis.net/techblog/) to a home-grown platform of my own making, even if after doing that I call it done and walk away.
 
Another motivator for getting that done is the fact that, just writing this, I'm writing with very tiny text on a very high resolution screen, and you're probably squinting reading this, too, if you're reading it from my web site, and if you actually made it this far. Again, yes, I could simply replace the template and maybe refresh the version of BlogEngine.net I'm using. But why do that, when I call myself a web development veteran of 23-plus years and could just build my own? Am I really so useless in my aging as to be unable to build something of my own again? It's not that a blog engine will get me a better job, duh; it's that using someone else's engine just plain looks bad on me. So, this is a lightweight test of mettle. I need to do this because I need to.
 
On a final note, I'll mention again what I'm focusing on first: "who am I"--authentication and authorization. I'm learning up on Identity Server 4. I'm still new to it, so this will take some time. Right now it looks like the black box version of my blog engine will come with an instance of Identity Server 4, based perhaps on ASP.NET Core Identity as the user store; developers can tweak it out to their heart's content. I'm still mulling over whether to just embed it into the blog app (underkill? too much in one?) or separate it out as a separate SSO-oriented web app (overkill? too many distributed parts for a mere blog?), but at this point I'll likely do the former for the black box download (source code included) and the latter for the hosted instance which I hope to set up and offer to people wordpress.com-style.
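For the embedded option, the wiring I have in mind looks roughly like the following. Treat this as a hedged sketch only; I'm still learning Identity Server 4, and the empty in-memory registrations are placeholders, not a working configuration:

using IdentityServer4.Models;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();

        // Embed IdentityServer4 in the blog app itself; the user store would
        // come from ASP.NET Core Identity (via IdentityServer4.AspNetIdentity).
        services.AddIdentityServer()
            .AddDeveloperSigningCredential()                   // dev-time signing key only
            .AddInMemoryIdentityResources(new IdentityResource[0])
            .AddInMemoryApiResources(new ApiResource[0])
            .AddInMemoryClients(new Client[0]);                // the blog app, future clients
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseIdentityServer(); // wires up the OAuth2 / OpenID Connect endpoints
        app.UseMvcWithDefaultRoute();
    }
}

The separate-SSO-app alternative would move exactly this block into its own web app and leave the blog as just another client of it.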

 


Tags:

Blog | General Technology | Web Development

Technology Status Update 2016

by Jon Davis 10. July 2016 09:09

Hello, peephole. (people.) Just a little update. I've been keeping this blog online for some time, and my most recent blog entries have been so negative that I keep having to see that negativity every time I check to make sure the blog's up, lol. I'm tired of it, so I thought I'd post something positive.

My current job is one I hope to keep for years and years to come, and if that doesn't work out I'll be looking for one just like it and try to keep it for years and years to come. I'm so done with contracting and consulting (except the occasional mentoring session on code mentor -dot- io). I'm still developing, of course, and as technology is changing, here's what's up as I see it. 

  1. Azure is relevant. 
    The world really has shifted to the cloud, and the majority of companies, finally, are offloading their hosting to it. AWS, Azure, take your pick; everyone who hates Microsoft will obviously choose AWS, but Azure is the obvious choice for Microsoft-stack folks, and there is nothing meaningful AWS has that Azure doesn't at this point. The amount of stuff on Azure is sufficiently terrifying in quantity and supposed quality to give me a thrill. So I'm done with hating on Azure; after all their marketing and nagging and pushing, Microsoft has crossed a threshold of market saturation such that I am adequately impressed. I guess that means I have to be an Azure fan, too, now. Fine. Yay Azure, woo. -.-
  2. ASP.NET is officially rebooted. 
    So I hear this thing called ASP.NET Core 1.0, formerly known as ASP.NET 5, formerly known as ASP.NET vNext, has RTM'd, and I hear it's like super duper important. It snuck by me; I haven't mastered it, but I know it enough to know a few things:
    • It's a total redux by means of redo. It's like the Star Trek reboot except it’s smaller and there are fewer planets it can manage, but it’s exactly like the Star Trek reboot in that it will probably implode yours.
    • If you've built your career on ASP.NET and you want to continue living on ASP.NET's laurels, now is not the time to master ASP.NET Core 1.0. Give it another year or two to mature.
    • If you're stuck on or otherwise fascinated by non-Microsoft operating systems, namely Mac and Linux, but you want to use the Microsoft programming stack, you absolutely must learn and master ASP.NET Core 1.0 and EF7.
    • If all you liked from ASP.NET Core 1.0 was the dynamic configs and build-time transpiles, you don't need ASP.NET Core for that LOL LOL ROFLMAO LOL LOL LOL *cough*
  3. The Aurelia Javascript framework is nearly ready.
    Overall, Javascript framework trends have stalled. Companies are building upon AngularJS 1.x. Everyone who’s behind is talking about React as if it were new and suddenly newly relevant (it isn’t new anymore). Everyone still implementing Knockout is out of the loop and will die off soon enough. jQuery is still ubiquitous and yet ignored as a thing, but meanwhile it just turned v3.

    I don’t know what to think about things anymore. Angular 2.0 requires TypeScript, people hate TypeScript because they hate transpilers. People are still comparing TypeScript with CoffeeScript. People are dumb. If it wasn’t for people I might like Angular 2.0, and for that matter I’d be all over AureliaJS, which is much nicer but just doesn’t have Google as the titanic marketing arm. In the end, let’s just get stuff done, guys. Build stuff. Don’t worry about frameworks. Learn them all as you need them.
  4. Node.js is fading and yet slowly growing in relevance.
    Do you remember .. oh heck, unless you're graying, probably not, anyway .. do you remember back in the day when the dynamic Internet was first loosed on the public, and C/C++ and Perl were used to execute from cgi-bin, and if you wanted to add dynamic stuff to a web site you had to learn Perl and maybe find Perl pearls and plop them into your own cgi-bin? Yeah, no, I never really learned Perl either, but I did notice the trend. In the end, what did C/C++ and Perl mean to us up until the last decade? Answer: ubiquitous availability; not web server functionality, just an ever-present availability for scripts, utilities, hacks, and whatever. That is where node.js is headed. Node.js for anything web-related has become, and will continue to be, a gigantic mess of disorganized, everyone-is-equal, noisily integrated modules that sort of work but will never be as stable in built compositions as more carefully organized platforms. Frankly, I see node.js being more relevant as a workstation runtime than a server runtime. Right now I'm looking at maybe poking at it in a TFS build environment, but not so much for hosting things.
    I will always have a bitter taste in my mouth with node.js after trying to get socket.io integrated with Express and watching the whole thing just crumble, with no documentation or community help to resolve it. This happened not just once on the job (never resolved before I walked away) but also during a code-mentor mentoring session (which we didn't figure out), even after a good year or so of platform maturity following the first instance. I still like node.js but will no longer be trying to build a career on it.
  5. Pay close attention and learn up on Swagger aka OpenAPI. 
    Remember when -- oh wait, no, unless you're graying, .. nevermind .. anyway, -- once upon a time something called SOAP came out, and it brought with it a self-documentation feature that was a combination of WSDL and some really handy HTML-generated scaffolding built into web services that would let you manually test SOAP-based services by filling out a self-generated form. Well, now that JSON-based REST is the entirety of the playing field, we need the same self-documentation. That's where Swagger came in a couple of years ago, and everyone uses it now. Swagger needs some serious overhauling--someone needs to come up with a Swagger-compliant UI built on more modular and configurable components, for example--but as a drop-in self-documentation feature for REST services it fits the bill.
    • Swagger can be had on .NET using a lib called Swashbuckle. If you use OData, there is a lib called Swashbuckle.OData. We use it very, very heavily where I work. (I was the one who found it and brought it in.) "Make sure it shows up and works in Swagger" is a requirement for all REST/OData APIs we build now. (The basic wiring is sketched just after this list.)
    • Swagger is now OpenAPI but it's still Swagger, there are not yet any OpenAPI artifacts that I know of other than Swagger. Which is lame. Swagger is ugly. Featureful, but ugly, and non-modular.
    • Microsoft is listed as a contributing member of the OpenAPI committee, but I don't know what that means, and I don't see any generic output from OpenAPI yet. I'm worried that Microsoft will build a black box (rather than white box) Swagger-compliant alternative for ASP.NET Core.
    • There are other curious ones to pay attention to, too, but I don't see them as significantly supported by the .NET community yet (maybe I haven't looked hard enough).
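    For reference, the basic Swashbuckle wiring on classic ASP.NET Web API looks about like this, going from memory; treat it as a sketch and check Swashbuckle's own docs for specifics:

    using System.Web.Http;
    using Swashbuckle.Application;

    public static class SwaggerConfig
    {
        public static void Register()
        {
            // Exposes the generated spec and the interactive test UI.
            GlobalConfiguration.Configuration
                .EnableSwagger(c => c.SingleApiVersion("v1", "My API"))
                .EnableSwaggerUi();
        }
    }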
  6. OData v4 has potential but is implementation-heavy and sorely needs a v5. 
    image
    A lot of investments have been made in OData v4 as a web-based facade to Entity Framework data resources. It's the foundation of everything the team I'm with is working on, and I've learned to hate it. LOL. But I see its potential. I hope investments continue, because it is sorely missing fundamental features like:
    • MS OData needs better navigation property filtering and security checking, whether by optionally redirecting navigation properties to EDM-mapped controller routes (yes, taking a performance hit) or some other means
    • MS OData '/$count' breaks when [ODataRoute] is declared, boo.
    • OData spec sorely needs "DISTINCT" feature
    • $select needs to be smarter about returning anonymous models and not just eliminating fields; if all you want is one field in a nested navigation property of a nested navigation property (the equivalent of LINQ's .Select(x => new { ID = x.ID, DesiredField2 = x.Child.Child2.DesiredField2 })), in the OData result set you will have to dive into an array and then into another array to find the one desired field
    • MS OData output serialization is very slow and CPU-heavy
    • Custom actions and functions, and making them exposed to Swagger via Swashbuckle.OData, make me want to pull my hair out. It sometimes takes two hours of screaming and choking people to set up a route in OData that would take me two minutes in Web API, and in the end I get a weird namespaced function name in the route like /OData/Widgets/Acme.GetCompositeThingmajig(4); there's no getting away from even the default namespace, and your EDM definition must be an EXACT match to what is clearly, obviously spelled out in the C# controller implementation, or you die. I mean, if Swashbuckle / Swashbuckle.OData can mostly figure most of it out without making us dress up in a weird Halloween costume, surely Microsoft's EDM generator should have been able to. (See the sketch just below.)
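    To show what I mean by the EDM ceremony, here is roughly what declaring one such custom function looks like with the Web API OData model builder. A hedged sketch with made-up names (Widget, GetCompositeThingmajig); the exact URL shape depends on your routing config:

    using Microsoft.OData.Edm;
    using System.Web.OData.Builder;

    public class Widget { public int ID { get; set; } }

    public static class ODataConfig
    {
        public static IEdmModel BuildModel()
        {
            var builder = new ODataConventionModelBuilder();
            builder.EntitySet<Widget>("Widgets");

            // This declaration must mirror the controller method exactly, or the
            // route 404s; the function name ends up namespaced in the URL, e.g.
            // /OData/Widgets/Default.GetCompositeThingmajig(id=4)
            var func = builder.EntityType<Widget>().Collection
                .Function("GetCompositeThingmajig")
                .Returns<string>();
            func.Parameter<int>("id");

            return builder.GetEdmModel();
        }
    }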
  7. "Simple CRUD apps" vs "messaging-oriented DDD apps"
    has become the new Microsoft vs Linux or C# vs Java or SQL vs NoSQL. 

    The war is really ugly. Over the last two or three years, people have really been talking about how microservices and reaction-oriented software have turned the software industry upside down. Those who hop on the bandwagon neglect to know when to choose simpler tooling chains for simple jobs; meanwhile, those who refuse to jump on the bandwagon use some really harsh, cruel words to describe the trend ("idiots", "morons", etc.). We need to learn to love and embrace all of these forms of software, allow them to grow us up, and know when to choose which pattern for which job.
    • Simple CRUD apps can still accomplish most business needs, making them preferable most of the time
      • .. but they don't scale well
      • .. and they require relatively little development knowledge to build and grow
    • Non-transactional message-oriented solutions and related patterns like CQRS-ES scale out well but scale developers' and testers' comprehension very poorly; they have an exponentially scaling complexity footprint. But for the thrill seekers they can be, frankly, hella fun and interesting, so long as they are not built upon ancient ESB systems like SAP and so long as people can integrate in software planning war rooms.
    • Disparate data sourcing as with DDD with partial data replication is a DBA's nightmare. DBAs will always hate it, their opinions will always be biased, and they will always be right in their minds that it is wrong and foolish to go that route. They will sometimes be completely correct.

  8. Integrated functional unit tests are more valuable than TDD-style purist unit tests. That’s my new conclusion about developer testing in 2016. The purist TDD mindset still has a role in the software developer’s life. But there is also value in automated integration tests, and when things like Entity Framework are heavily in play, apparently it’s better to build upon LocalDB automation than Moq. (A sketch of what that looks like follows this item.)
    At least, that’s what my current employer has forced me to believe. Sadly, the purist TDD mindset that I tried to adopt and bring to the table was not even slightly appreciated. I don’t know if I’m going to burn in hell for being persuaded out of a purist unit testing mindset or not. We shall see, we shall see.
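    For the unfamiliar, the LocalDB style of test looks something like this. A hedged sketch with hypothetical names (the WidgetTests database, the repository being exercised), using MSTest:

    using System.Data.SqlClient;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class WidgetRepositoryIntegrationTests
    {
        // Exercise the real data stack against a throwaway LocalDB database
        // instead of mocking the DbContext.
        const string ConnectionString =
            @"Server=(localdb)\MSSQLLocalDB;Database=WidgetTests;Integrated Security=true";

        [TestMethod]
        public void CanRoundTripAWidget()
        {
            using (var conn = new SqlConnection(ConnectionString))
            {
                conn.Open();
                // .. create schema, run the repository under test, assert on real data
            }
        }
    }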
  9. I'm hearing some weird and creepy rumors I don't think I like about SQL Server moving to Linux and eventually getting itself renamed. I don't like it, I think it's unnecessary. Microsoft should just create another product. Let SQL Server be SQL Server for Windows forever. Careers are built on such things. Bad Microsoft! Windows 8, .NET Framework version name fiascos, Lync vs Skype for Business, when will you ever learn to stop breaking marketing details to fix what is already successful??!
  10. Speaking of SQL Server, SQL Server 2016 is RTM'd, and full blown SSMS 2016 is free.
  11. On-premises TFS 2015 only just recently acquired gated check-in build support in a recent update. Seriously, like, what the heck, Microsoft? It's also super buggy; you get a nasty error message in Visual Studio while monitoring its progress. This is laughable.
    • Clear message from Microsoft: "If you want a premium TFS experience, Azure / Visual Studio Online is where you have to go." Microsoft is no longer a shrink-wrapped product company; they sell shrink-wrapped software only for the legacy folks as an afterthought. They are a hosted platform company now, all the way.
      • This means that Windows 10 machines including Nokia devices are moving to be subscription boxes with dumb client deployments. Boo.
  12. Another rumor I've heard is that Microsoft is going to abandon the game industry.

    The Xbox platform was awesome because Microsoft was all in. But they're not all in anymore, and it shows, and so now as they look at their lackluster profits, what did they expect?
    • Microsoft: Either stay all-in with Xbox and also Windows 10 (dudes, have you seen Steam's Big Picture mode? no excuse!) or say goodbye to the consumer market forever. Seriously. Because we who thrive on the Microsoft platform are also gamers. I would recommend knocking yourselves over to partner with Valve to co-own the whole entertainment world like the oligarchies that both of you are since Valve did so well at keeping the Windows PC relevant to the gaming markets.

For the most part I've probably lived under a rock, I'm sure. I've been too busy enjoying my new 2016 Subaru WRX (a 4-door racecar), which I am probably going to sell in the next year because I didn't get out of debt first, but not before getting a Kawasaki Vulcan S ABS Café as my first motorized two-wheeler, riding that between playing Steam games, going camping, and exploring other ways to appreciate being alive on this planet. Maybe someday I'll learn to help the homeless and unfed, as I should. BTW, in the end I happen to know that "love God and love people" are the only two things that matter in life. The rest is fluff. But I'm so selfish; man, do I enjoy fluff. I feel like such a jerk. Those who know me know that I am one. God help me.

(Images: the 2016 Kawasaki Vulcan S ABS Café, among others.)
Top row: Fluff that doesn’t matter and distracts me from matters of substance.
Bottom row: Matters of substance.


Tags:

ASP.NET | Blog | Career | Computers and Internet | Cool Tools | LINQ | Microsoft Windows | Open Source | Software Development | Web Development | Windows

Announcing Fast Koala, an alternative to Slow Cheetah

by Jon Davis 17. July 2015 19:47

So this is a quick FYI for teh blogrollz that I have recently been working on a little Visual Studio extension that will do for web apps what Slow Cheetah refused to do: it enables build-time transformations for web apps as well as for Windows apps and class libs.

Here's the extension: https://visualstudiogallery.msdn.microsoft.com/7bc82ddf-e51b-4bb4-942f-d76526a922a0  

Here's the Github: https://github.com/stimpy77/FastKoala

Either link will explain more.

Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to make it truly an I/O library. In the meantime, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are simplified into a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (as optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below
Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as when working with localStorage, it returns a "synchronous promise", which is essentially a function that can optionally be invoked immediately; doing so wraps .success(value) and returns the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The examples below are the same, only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implemented functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique: xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently with assertions that it will have some CRUD operations available, the perspective is flipped and you are focusing on what you need to do, the verbs, first, while the target becomes more like a parameter or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves so that, rather than repositories having an unknown set of operations with unknown interface styles and other features, instead, your standard CRUD operations, which are predictable, have a set of valid repository targets that support those operations.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented based on JSON/Javascript literals field sets (send only deltas)
Examples
// initialize

var xio = Xio(); // initialize a module instance named "xio"
localStorage
xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) 
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});
sessionStorage
xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");
cookie
xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, retaining only the xio.set["cookie"](key, value) form, you can chain the automatically returned helper functions:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000))
    .path("/")
    .domain("mysite.com");

Note that using this approach, while more expressive and potentially more convertible to other CRUD targets, also results in each helper function deleting the previous value and re-setting it with the new adjustment.

session cookie
xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
persistent cookie
xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
web server resource (basics)
var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().

Where the methods property is defined as an array of "GET", "POST", etc., with each one mapping to standard XIO verbs, an XHR route will be internally created on behalf of the rest of the options defined in the options object that is passed in as a parameter to define(). The return value of define() is an object that lists all of the various operations that were wrapped for XIO (i.e. get(), post(), etc).

The rest of the options are used, for now, as jQuery's $.ajax(..., options) parameter. The async property defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally invoke immediately with parens (()) to return the value that is normally passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.

web server resource (asynchronous GET)
xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});
web server resource (synchronous GET)
xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction
web server resource POST
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});
web server resource (DELETE)
xio.delete.myresourceContainer("myresource");
web server resource (PUT)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
web server resource (PATCH)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
custom implementation and redefinition
xio.define("custom1", {
    get: function(key) { return "teh value for " + key};
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fallback to long polling, and that has no dependency upon jQuery. Instead, what has so far become implemented has been a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on nodejs.

Additionally, in a completely isolated parallel path, if no progress is made by the ASP.NET SignalR team to make the SignalR client freed from jQuery, xio.js might become tailored to be a somewhat code compatible client implementation or a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at top, you definitely may use it for any project. I will drop in a license (probably Apache 2 or BSD or Creative Commons Attribution or somesuch) in the near future.

A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date; and short of devising a new approach to route manipulation or any of the other fundamental infrastructural designs for the endpoints (there were too many), our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>
<div id="results"></div>
<div><button id="reload">Reload</button></div>
@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function() {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought was the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case; it is actually server-side, probably because of the amount of data being transferred. I’ll explain after I describe what I did under my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy..” semantics, such as varying the route parameters. That is great except that the route and parameters were currently not changing in our implementation.

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.

 

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    Response.RemoveOutputCacheItem(Url.Action("GetData"));
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

 

 

<button id="invalidate">Invalidate</button></div>

 

 

$('#invalidate').on('click', function() {
    $.post($APPROOT + "Home/DoSomething", null, function(o) {
        window.location.reload();
    }, 'json');
});

 

(Screenshot: while Reload has no effect on the Last modified value, the Invalidate button causes the date to increment.)

When testing, this actually worked quite well. But concerns were raised about the memory payload on the server. Personally, I think the memory payload of practically any server-side caching is negligible, certainly if it is small enough that it would be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is that transmission: the point of caching is to make the user experience as smooth and seamless as possible, with minimal waiting, so if the user is waiting for a (cached) payload, while it may be much faster than the time taken to recalculate or re-acquire the data, it is still measurably slower than relying on browser cache.

The default implementation of OutputCacheAttribute is actually OutputCacheLocation.Any. This indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests: for tiny payloads, the behavior seemed to be caching on the server and no caching on the client; for a large payload from GET requests with querystring parameters, the behavior seemed to be caching on the client but with an HTTP query carrying an “If-Modified-Since” header, resulting in a 304 Not Modified from the server (indicating it was also cached on the server, but the server verified that the client’s cache remained valid); and for a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation checking from the client (no HTTP request for an If-Modified-Since check). Now, to be quite honest, I am only guessing that these were the distinguishing factors behind these observations. Honestly, I saw variations of these behaviors happening all over the place as I tinkered with scenarios; this was just the initial pattern I felt I was observing.

At any rate, for our purposes we were currently stuck with relying on “Any” as the location, which in theory would remove server-side caching if the server ran short on RAM (in theory, I don’t know, although the truth can probably be researched, which I don’t have time to get into). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you really can’t. The browser controls the cache bucket and no browsers provide hooks into the cache to invalidate them. But we can get smart about this, and work around the problem, by bypassing the cached data. Cached HTTP results are stored on the basis of varying by the full raw URL on HTTP GET methods, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and are only cached if allowed to be cached in the first place as per the HTTP header directives in the HTTP response. So, to bypass the cache you don’t cache, or you need to know up front how long the cache should remain until it expires—neither of these being acceptable in a dynamic application—or you need to use POST instead of GET, or you need to vary up the URL.

Microsoft originally got around the caching problem in ASP.NET 1.x by forcing the “normal” development cycle in the lifecycle of <form> tags that always used the POST method over HTTP. Responses from POST requests are never cached. But POSTing is not clean as it does not follow the semantics of the verbiage if nothing is being sent up and data is only being retrieved.

You can also use ETag in the HTTP headers, which isn’t particularly helpful in a dynamic application as it is no different from a URL + expiration policy.
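For completeness, the ETag approach looks something like the following in ASP.NET MVC. This is a hedged sketch; BuildJsonPayload() is a made-up helper standing in for whatever produces the response body:

// Inside a System.Web.Mvc controller.
public ActionResult GetData()
{
    string json = BuildJsonPayload(); // hypothetical helper
    string etag = "\"" + json.GetHashCode().ToString("x") + "\"";

    // If the client already holds this exact content, short-circuit with 304.
    if (Request.Headers["If-None-Match"] == etag)
        return new HttpStatusCodeResult(304);

    Response.Cache.SetCacheability(HttpCacheability.Private);
    Response.Cache.SetETag(etag);
    return Content(json, "application/json");
}

As noted, though, this still keeps the server in the request/response loop for every hit, which is part of what caching was supposed to avoid in the first place.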

To summarize, to control cache:

  • Disable caching from the server in the Response header (Pragma: no-cache)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST not GET
  • Etag
  • Vary the URL (case-sensitive)

Given our options, we need to vary up the URL. There are a number of approaches to this, but almost all of them involve appending to or modifying the querystring with parameters that are expected to be ignored by the server.

$.getJSON($APPROOT + "Home/GetData?_="+Date.now(), function (o) {
$('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you’re reading this blog article, you’re probably rolling your eyes. “Duh.”) jQuery inherently supports cache-busting, but it does not do it on its own from $.getJSON(); it only does it in $.ajax() when the options parameter includes {cache: false}, unless you invoke $.ajaxSetup({ cache: false }); first to disable all caching. Otherwise, for $.getJSON() you would have to do it manually by appending to the URL. (Alright, you can stop rolling your eyes at me now, I’m just trying to be thorough here..)

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you’re using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. The approach in doing this will depend upon your Javascript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with an invalidation routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example since you’re hacking at a third-party library; a better approach might be to wrap the invocation of $.getJSON() with an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that the invalidation of cached data that derived from the server needs to be triggered by the server because it is the server, not the client, that knows that client cached data is no longer up-to-date. Depending on the application, the client logic might just know by keeping track of what server endpoints it is touching, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale given specific conditions that only the server may know and perhaps only upon some calculation. In other words, invalidation needs to be pushed by the server.

One brute force, burdensome, and perhaps a little crazy approach to this might be to use actual “push technology”, formerly “Comet” or “long-polling”, now WebSockets, implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server and the server then has this open socket that can push invalidation flags to the client.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.

While we may not know what server endpoint might introduce the invalidation of client cache data, we could assume that the invalidation will be triggered by any server endpoint(s), and build invalidation trigger logic on the response of server HTTP responses.

To begin implementing some sort of invalidation trigger on the server I could flag invalidations to the client using HTTP header(s).

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    Response.RemoveOutputCacheItem(url); // invalidate on server
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says that “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one”. The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

Before I do anything, I first need to refactor the existing AJAX functionality on the client so that instead of using $.getJSON, I might use $.ajax or some other flexible XHR handler, and wrap it all in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function(url, data, callback) {
    return httpAction(url, data, callback, "GET");
};
var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};
var httpAction = function(url, data, callback, method) {
    url = cachebust(url);
    if (typeof(data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method, // use the verb passed in (was hard-coded to "GET")
        success: function(responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};
var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};
function handleInvalidationFlags(xhr) {
    // not yet implemented
}
function cachebust(url) {
    // not yet implemented
    return url;
}

// application logic
httpGET($APPROOT + "Home/GetData", function(o) {
    $('#results').text("Last modified: " + o.LastModified);
});
$('#reload').on('click', function() {
    window.location.reload();
});
$('#invalidate').on('click', function() {
    httpPOST($APPROOT + "Home/DoSomething", function (o) { // matches the DoSomething action above
        window.location.reload();
    });
});

At this point we’re not doing anything new yet; we’ve just broken the HTTP/XHR functionality up into wrapper functions that we can now modify to manipulate the request and deal with the invalidation flag in the response. From here, all our work will be in handleInvalidationFlags(), which captures the new header we just emitted from the server, and cachebust(), which hijacks the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there and add the cached item to a cached data set that can be stored locally in the browser with web storage. The best place to put this cached data set is sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration set) works, but is less ideal because it adds to the payload of every HTTP request. Putting it in localStorage is less ideal because we do want the invalidation flag(s) to go away when the browser session ends; that is when the original browser cache will expire anyway. There is one caveat to sessionStorage: if a user opens a new tab or window, the browser will not carry the sessionStorage over to that new tab or window, but it may reuse the browser cache. The only workarounds I know of at the moment are to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be either. We will need to ignore case when matching URLs against cache invalidation flags.

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {

    // capture the HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');

    // get the invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};

    // update the invalidation flags data set
    for (var i in invalidatedItemsHeader) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }

    // store the revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET, which ignores case on the path, we need a function to force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {

    // get the invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};

    // on a match, return the URL with a cache-busting value concatenated
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }

    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is stored and then reused as the cache-busting value appended to the URL. This allows the data to remain cached, just updated to that point in time. If invalidation occurs again, that value is revised to the new date/time.

Running this now, after invalidation is triggered on the server, the subsequent request for the data has a cache-busting querystring field appended.


In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might follow these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited value to list multiple items (see the sketch after this list). Do not URI-encode the semicolon when using it as a URI list delimiter: the semicolon is a reserved character in URIs and a valid delimiter in HTTP headers, so it can unambiguously separate URLs.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:

    {
        "http://url.com/route/action": (date_value_of_invalidation_flag),
        "http://url.com/route/action/2": (date_value_of_invalidation_flag)
    }

  4. Hijack all XHR requests so that the URL is appended with a cache-busting querystring parameter if the URL is found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default ignore case on the path (“/Route/Action” == “/route/action”), but the HTTP specification does not, and therefore the browser’s cache bucket will not ignore case either. Force all URL checks against invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring; otherwise, for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters appear in a consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client.
  7. These steps are for “pull”-based invalidation, where flags are acquired from the server in XHR responses. For “push”-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication over WebSockets or long polling (see the hub sketch earlier in this article). Server application logic can then invoke this channel or hub to send an invalidation flag to one client or to all clients.
  8. On the client side, an invalidation flag “pushed” as in #7 above, for which #1 and #2 would no longer apply, can still utilize #3 through #6.
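As a sketch of the multi-item convention in step 1, the earlier InvalidateCacheItem helper might be generalized along these lines. InvalidateCacheItems is a hypothetical name for illustration; it is not part of the downloadable project below.

// Hypothetical variant of the earlier InvalidateCacheItem, flagging several
// URLs in one response using the semicolon-delimited convention from step 1.
public void InvalidateCacheItems(params string[] urls)
{
    foreach (var url in urls)
    {
        // static method on System.Web.HttpResponse; invalidates the server's output cache
        HttpResponse.RemoveOutputCacheItem(url);
    }
    // one header carrying all invalidated URLs for the client
    Response.AddHeader("X-Invalidate-Cache-Item", string.Join(";", urls));
}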

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip

Be the first to rate this post

  • Currently 0/5 Stars.
  • 1
  • 2
  • 3
  • 4
  • 5

Tags:

ASP.NET | C# | Javascript | Techniques | Web Development

ASP.NET MVC 4: Where Have All The Global.asax Routes Gone?

by Jon Davis 23. June 2012 03:03

I ran into this a few days back and had been meaning to blog about it, so here it finally is while it’s still interesting information.

In ASP.NET MVC 1.0, 2.0, and 3.0, routes are defined in the Global.asax.cs file in a method called RegisterRoutes(..).

[image: mvc3_register_routes]

It had become an almost unconscious navigate-and-click routine for me to open Global.asax.cs to diagnose routing errors and to introduce new routes. So upon starting a new ASP.NET MVC 4 application with Visual Studio 11 RC (or Visual Studio 2012 RC, whichever it will be called), it took me by surprise to find that the RegisterRoutes method is no longer defined there. In fact, the MvcApplication class defined in Global.asax.cs contains only 8 lines of code! I panicked when I saw this. Where do I edit my routes?!

[image: mvc4_globalasax]
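For reference, the MVC 4 template’s Global.asax.cs boils down to roughly the following (reconstructed from the default project template, so exact contents may vary between template flavors):

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}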

What kept me befuddled for far too long (quite a bit longer than a couple of seconds, shame on me!) was that these lines of code, when merely glanced at rather than actually read, look similar to the Application_Start() from the previous iteration of ASP.NET MVC:

[image: mvc3_globalasax]

Eventually I squinted and paid closer attention to the difference, and then I realized that the RegisterRoutes(..) method is still being invoked, but it is managed in a separate configuration class. Is this class an application settings class? Is it a POCO class? A wrapper for a web.config setting? Before I knew it I was already right-clicking on RegisterRoutes and choosing Go To Definition ..

[image: mvc4_globalasax_gotodef]

Under Tools –> Options –> Projects and Solutions –> General I have Track Active Item in Solution Explorer enabled, so upon right-clicking an object member reference in code and choosing “Go To Definition” I always glance over at Solution Explorer to see where it navigates to in the tree. This is where I immediately found the new config files:

[image: mvc4_app_start_solex]

.. in a new App_Start folder, which contains FilterConfig.cs, RouteConfig.cs, and BundleConfig.cs, as named by the invoking code in Global.asax.cs. And to answer my own question, these are POCO classes, each with a static method (e.g. RegisterRoutes).
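RouteConfig.cs, for example, is essentially the old RegisterRoutes(..) logic relocated into its own POCO class; the default template version looks roughly like this (again reconstructed from the template, so minor details may differ):

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}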

I like this change. It’s a minor refactoring that cleans up code. I don’t understand the naming convention of App_Start, though. It seems like it should be called “Config” or something, or else Global.asax.cs should be moved into App_Start as well, since Application_Start() lives in Global.asax.cs. But whatever. Maintaining configuration details in one big Global.asax.cs file gets to be a bit of a pain sometimes, especially in growing projects, so I’m very glad that such configuration details are now tucked away in their own dedicated spaces.

I am curious but have not yet checked whether App_Start, as a new ASP.NET folder, has any inherent behaviors associated with it, such as post-edit auto-compilation. I’m doubtful.

In future blog post(s), perhaps my next post, I’ll go over some of the other changes in ASP.NET MVC 4.


Tags:

ASP.NET | Web Development

Microsoft: We’re Not Stupid

by Jon Davis 2. May 2010 23:40

We get it, Microsoft. You want us to use Azure. You want us to build highly scalable software that will run in the cloud—your cloud. And we’ll get the wonderful support from Microsoft if we choose Azure. Yes, Microsoft. We get it.

We’re not stupid. You play up Azure like it’s a development skill, or some critical piece of the Visual Studio development puzzle, but we recognize that Azure is a proprietary cloud service that you’re advertising, not an essential tool chain component. Now go away. Please. Stop diluting the MSDN Magazine articles and the msdev Facebook app status posts with your marketing drivel about Azure. You are not going to get any checks written out from me for hosted solutions. We know that you want to profit from this. Heck, we even believe it might actually be a half-decent hosting service. But, Microsoft, you didn’t invent the cloud, there are other clouds out there, so tooling for your operating system using Visual Studio does not mean that I need to know diddly squat about your tirelessly hyped service.

There are a lot of other things you can talk about and still make a buck off of your platform. You can talk about how cool WPF is as a great way to build innovative Windows-only products. You can focus on how fast SQL Server 2008 R2 is and how Oracle wasted their money on a joke (mySQL). You can play up the wonderful extensibility of IIS 7 and all the neat kinds of innovative networked software you can build with it. Honestly, I don’t even know what you should talk about because you’re the ones who know the info, not me.

But, Microsoft, it’s getting really boring to hear the constant hyping of Azure. I’ve already chosen how my stuff will be hosted, and that’s not going to change right now. So honestly, I really don’t care.

Maybe I need to explain why I don’t care.

Microsoft, there are only two groups of people who are going to choose your ridiculously wonderful and bloated cloud: established mid-market businesses with money to spend, and start-ups with a lot of throw-away capital who drank your kool aid. You shouldn’t worry about those people. The people you should worry about are those who will choose against it, and will have made their decision firmly.

First of all, I believe most enterprises will not want to put their data in a cloud, certainly not behind a standardized set of cloud interfaces. It’s too great a security risk. Amazon’s true OS cloud is enticing because companies can roll their own services with proprietary APIs and have them talk to each other while rolling out VM instances on a whim. They have sufficient tooling outside of cloud-speak to write what they need and to do what needs doing. But for the most part, companies want to keep internal data internal.

Second, we geeks don’t fiddle a whole lot with accounting and taking corporate risks. We focus on writing code. That code has to be portable. It has to run locally, on a dedicated IIS server, or in a cloud. If code written to deploy to your cloud (whether a true cloud or a local testing instance, it doesn’t matter) doesn’t run equally well in other environments, writing it is at best a redundant effort and at worst a wasted one: we have to write code for your cloud and then write code again for running without your cloud. We most certainly would not be comfortable writing code that only runs on your cloud, but the way your cloud APIs are marketed, we might as well bet the whole farm on it. And that just ain’t right.

See, I don’t like going into anything not knowing I can pull out and look at alternatives at any time without having completely wasted my efforts. If I’m going to write code for Azure, I want to be assured that the code will have the same functionality outside of Azure. But since Azure APIs only run in the Azure cloud, and Azure cannot be self-hosted (outside of localhost debugging), I don’t have that assurance. Hence, I as a geek and as an entrepreneur have no interest in Azure.

When I choose a tool chain, I choose it for its toolset, not for its target environment. I already know that Windows Server with IIS is adequate for the scale of runtimes I work with. When I choose a hosting service, I choose it expecting to be very low-demand but with potential for high demand. I don’t want to pay out the nose for that potential. I often experiment with different solutions and discover a site’s market potential. But I don’t go in expecting to make a big buck—I only go in hoping to.

What would gain my interest in Azure? Pretty much the only thing that would have me give Azure even a second glance would be a low-demand (low traffic, low CPU, low storage, and low memory) absolutely free account, whereby I am simply billed if I go over my limit. If free’s no good, then a flat, ridiculously low rate, like $10/mo for reasonable usage and a reasonable rate when I go over. A trial is unacceptable; I’m not going to develop for something that is going to only be a trial. And I also prefer a reasonable flat rate for low-demand usage over a metered per-use one. I prefer to have an up-front idea of how much things will cost. I don’t have time to keep adjusting my budget. I don’t want to have to get billed first in order to see what my monthly cost will be.

I’m actually paying quite a bit of money for my Windows 2008 VPS, but the nice thing about it is there are no surprises, the server will handle the load, and if I ever exceed its capacity I can just get another account. Whereas, cloud == surprises. You have to do a lot of manual number crunching in order to determine what your bill is going to look like. “Got a lot of traffic this month? We got you covered, we automatically scaled for you. Now here’s your massive bill!”

Let’s put it this way, Microsoft. If you keep pushing Azure at me, I can abandon your tool chain completely and stick with my other $12/mo Linux VM that would meet my needs for a low-demand server on which I still have the support of some magnificent open source communities, and if my needs grow I can always instance another $12/mo account. Honestly, the more diluted the developer discussions are with Azure hype, the more inclined I am to go down that path. (Although, I’ll admit it’ll take a lot more to get me to go all the way with that.)

Just stop, please. I have no problem with Azure, you can put your banner ads and printed ads into everything I touch, I’m totally fine with that. What is really upsetting to me is when magazine and related content, both online and printed, is taken up to hype your proprietary cloud services, and I really feel like I’m getting robbed as an MSDN subscriber.

Just keep in mind, we’re not stupid. We do know service marketing versus helpful development tips when we see it. You’re only hurting yourselves when you push the platform on us like we’re lemmings. Speaking for myself, I’m starting to dread what should have been a wonderful year of 2010 with the evolution of the Microsoft tool chain.

 

[UPDATE: According to this, Azure will someday be independently hostable. That's better. Now I might start paying attention.] 


Tags:

Peeves | Software Development | Web Development

Microsoft WebsiteSpark Now Negates The Cost For Web Devs and Designers

by Jon Davis 26. September 2009 18:52

A couple weeks ago I posted a blog entry ("Is The Microsoft Stack Really More Expensive?") describing the financial barrier to entry for building software--particularly web apps--on the Microsoft platform. The conclusion was that the cost is likely to be nil if you're a) willing to settle for the Express products and everything else bundled in the Microsoft Web Platform Installer (which includes a slew of open source ASP.NET and PHP web apps to start you off), b) starting a software company, c) a student, or d) an employee of a company willing to foot the bill for an MSDN license (to you personally, not to your team).

Well, Microsoft just created yet another program, for those of you who are e) building or designing web sites. (Sweeeeet!!) For a $100 offing fee (a fee that you pay when your license ends, rather than when it begins) you get Windows Web Server 2008 R2, SQL Server 2008 Web Edition, Visual Studio 2008 Professional Edition, and Expression Studio 3.

Not bad! Although, your license ends in three (3) years (same as BizSpark).

Link: Microsoft WebsiteSpark

CSS For Layouts: Meh, I'll Stick With Tables, Thanks

by Jon Davis 24. September 2009 11:09

I tend to agree with the sentiments posted here:

http://www.flownet.com/ron/css-rant.html [followups here and here].

I've wasted WAY too many aggregate hours trying to nudge DIVs into place using half-baked CSS semantics that work on all browsers without workarounds, where the simple use of a <TABLE> tag would have sufficed just fine. Even this blog's sidebar does not behave the way I want it to, despite my spending time trying and failing to get it to behave according to my preferences, because I chose CSS instead of <TABLE> and, frankly, CSS sucks for some things such as this. With <TABLE> I can simply say <TABLE WIDTH="100%"><TR><TD>..</TD><TD WIDTH="300"> and boom, I have a perfect sidebar with a fixed width and a fluid content body, with ABSOLUTELY NO CSS TO MANGLE other than disabling the default HTML rendering behavior of borders and spacing.

Ron comments,

Another common thread is that "tables are for tabular data, not layout." Why? Just because they are called tables? Here's a news flash: HTML has no semantics beyond how it is rendered! (That's not quite true. Links have semantics beyond their renderings. And maybe label tags. But nothing else in HTML does.)

The only reservation I have in favor of DIVs instead of TABLEs is when it gets down to nesting. Deeply nested <TABLE>'s can get really, really ugly. I think this is where all the hatred of <TABLE>s comes from, which I agree with.

I've reached the conclusion that if I can use <DIV>'s effectively (and quickly) and the behavior is predictable, I'll use DIVs. But the same goes for TABLEs. I have yet to hear a CSS purist give a logical reason to use DIVs+CSS over TABLEs for overall page structure. It all seems to be cognitive dissonance and personal bias.


Tags:

Web Development

Is The Microsoft Stack Really More Expensive?

by Jon Davis 5. September 2009 23:17

As a Microsoft customer who at times rambles on with a fair share of complaints about Microsoft’s doings, I want to take a moment to discuss Microsoft’s successes in making its development stack affordable: as affordable as, or I’d argue even more affordable than, the LAMP + Adobe stacks.

Let’s Get Started

If you’re developing for the web, Microsoft makes it easy to download everything you need to develop on the Microsoft stack for free, with a do-it-all download application called the Microsoft Web Platform Installer. Everything you need to get started is available from that tool for free, including (but not limited to):

  • Visual Web Developer 2008 Express (FREE)
  • Silverlight tools for Visual Web Developer (FREE)
  • Microsoft SQL Server 2008 Express (FREE)
  • IIS extensions such as FastCGI for running PHP applications (FREE)
  • ASP.NET add-on libraries, including ASP.NET MVC (FREE)
  • Tons of free, open source ASP.NET applications (FREE)
  • Tons of free, open source PHP applications that can run on IIS (on Windows) (FREE)

I’ll even go so far as to repost a pretty Microsoft-provided button.

(FREE)

Windows

Let’s get the obvious realities of Microsoft stack expenses out of the way first. Microsoft is a platforms company. They make their money off of our dependence upon their platform. That platform is Windows. Many people’s reaction to this is to hold up two fingers to make a cross and shout, “Eww, nooo! No! Monopolies, baaad!” I believe I have a more well-rounded response, which is, “Oh! Well dang. If we’re going to build up a dependency upon a platform, that platform (and its sub-platforms) had better be REALLY FREAKING GOOD—good as in performant, easy to work with, reliable, scalable, and a joy to use, and it had better support all the things most all the other platforms support.”

Enter Windows Server 2008 and Windows 7.

Over the last decade, Microsoft has worked hard to achieve, and since Windows Vista (believe it or not) has already achieved, the right to sing the song to Linux,

Anything you can do,
I can do better!
I can do anything
better than you!

 

And yeah I think Microsoft gets the girl’s part on this one, but perhaps only because of:

Boy: I can live on bread and cheese.

Girl: And only on that?

Boy: Yup.

Girl: So can a rat.

By this I simply mean that everything on the Linux stack is also on the Windows stack, plus Microsoft has its own proprietary equivalents that, in the opinions of most of its customers, are a lot better than the open source equivalents. Take PHP for example. Internet Information Server 7 does everything Apache does, functionally speaking, including configuration details and hosting PHP, plus it can host non-HTTP network applications. It also performs faster than Apache at hosting PHP applications once FastCGI and binary script caching are installed and enabled. But beyond PHP, which in itself is technically not much more than ASP Classic (Javascript flavor), Microsoft’s ASP.NET is far more powerful and versatile than PHP, and it’s 100% free (after the cost of Windows itself). And don’t get me started about how much better I think Windows is at GUIs and graphics, with GDI+, DirectX, and WPF, than the Linux flavors. (Apple, on the other hand, competes pretty well.)

Windows can also execute all the Java and Ruby stuff that you see on *nix platforms. In fact, Windows has all the UNIX subsystem underpinnings to make a UNIX enthusiast comfortable. The shell and all that fluff is a separate download, but it’s all part of the Windows package and is free after the full Windows Ultimate or Windows Server license. You can snag Cygwin, too, if you want an even richer Linux-like experience.

So that’s Windows; you can go fully-licensed and get Windows 7 Ultimate ($219) + Windows Server 2008 R2 ($999) as a workstation + server combo for a total of $1,218 plus tax. However, if you’re in a position to care about that much money, I can tell you that you do not have to suffer that amount if you don’t want to.

First of all, Windows 7 Ultimate can perform just fine as a server. Windows Server 2008 is intended more for an enterprise environment that requires prison-like security and needs some very enterprisey or advanced features, such as hosting Active Directory domains, hosting Exchange Server, or hosting some unusual network services for developers with very specialized needs. If all your needs can be met with IIS and a database, so long as you don’t have a million hits a month (there is, unfortunately, simultaneous network connection count throttling built into Vista/7), you really don’t need anything more than Windows 7 Ultimate, no matter how many sites you host. It will scale, too, and in fact Windows 7 is built to handle tens of CPU cores. So, going Windows 7 only takes the total cost down to $219.

Second, if you really do want to go with the Server flavor, you have a couple more options, including a COMPLETELY FREE option which is very easily accessible, but I’ll get back to that later.

I just want to say, though, at this point, that I for one am already a Windows user, and you probably are too, statistically speaking. Our investments have already been made; however, only the Ultimate edition of Windows is one I would settle for as a “Microsoft stack” developer. Mind you, I’ve never had to pay the full price for any version of Windows in many, many years, yet I am currently running the latest and greatest. Again, I’ll get into that later.

Now let’s look at the development languages and the tools that support them.

Development Languages and Tools

The big names among the non-Microsoft platforms for languages and sub-platforms are:

  • PHP,
  • Ruby (on Rails),
  • Python, and
  • Java

Their tools come in many shapes and sizes. They can be as simple as vi or as complex as NetBeans. Many of the good tools people like to use are free. However, many of them are not.

For example, Aptana Studio is a very good web development IDE that supports Ruby, PHP, and Aptana’s own Javascript/AJAX platform called Jaxer, plus it runs in Eclipse so it supports Java as well. But the Pro version costs $99. That’s not free. There’s also JetBrains RubyMine which is also $99. On the other hand, Ruby developers tend to adore NetBeans, particularly over Aptana, and that is free. So go figure, to each his own.

The point is, if you want to get a rich and richly supported toolset, you’re just as likely going to have to pay for it in the non-Microsoft stacks.

On the Microsoft stack side, everyone knows about Visual Studio. The licensing cost for the Team Suite is $10,939. LAMP developers just love to point that kind of thing out. But folks, the fact is, that price is not measurable as the equivalent of LAMP freeware. It’s for an enterprise shop that needs very advanced and sophisticated tools for performing every corporate software role in a software development lifecycle. If you’re measuring the price here and it’s of concern to you, you probably don’t need to choose the most expensive offering to evaluate the costs of the MS stack!

First of all, the Professional edition of Visual Studio, if you’re crazy enough to have to pay for that out-of-pocket (i.e. not have your employer pay for it or get it in a bundled package such as one of the free ones) only costs $799, not $10,939.

Secondly, if money matters all that much to you, and you’re unable to get one of the free or nearly-free bundles (more on this in a bit), you really should push the limits of Visual Studio Express first. It’s free.

Experience Development Tools: Microsoft Expression vs. Adobe CS

Microsoft has been competing with what was Macromedia, now Adobe, on designer-oriented web tools for a very long time, and finally came through with a reasonable offering a few years back in Expression Studio, which offers close to the same functionality for creating compelling web experiences, at least at a basic level, as Adobe’s current CS4 Web Premium offering minus Photoshop.

Dreamweaver vs. Expression Web

A surprisingly large number of web designers use Adobe Dreamweaver (formerly Macromedia Dreamweaver) as their standard web creation tool, much as Adobe Photoshop is the ubiquitous tool for editing graphics. Microsoft has had an equivalent web creation tool for well over a decade. It used to be called FrontPage; now it is called Expression Web. But let’s get one thing clear: Expression Web replaces FrontPage, it is not a rename of FrontPage. It is, in fact, a different product that accomplishes the same task in the same general way. By that I mean that, as far as I know, very little of Expression Web’s codebase reuses FrontPage’s legacy codebase; it is a total rewrite and overhaul of both the tools and the rendering engine.

Expression Web supports PHP, in addition to its extensive support for ASP.NET and standards-based raw HTML and CSS. Technically, Expression Web is very close to being on par with Dreamweaver, and I think the differences are a matter of taste more than of function. I for one prefer the taste of Expression Web, and don’t know what Dreamweaver offers that Expression Web doesn’t.

Expression Studio includes Expression Design which is functionally equivalent, albeit to a much lesser extent, to Adobe Illustrator. The rest of the Expression Studio suite accomplishes most of the same functional tasks for web design and development as Adobe CS4 Web Premium Suite’s offering. So, to be functionally complete, you’d need to add a graphics editor to Expression Studio before Expression Studio can be compared with CS4.

As for the costs,

Expression Web: $149
Expression Studio + Paint.NET = $599 + $0 = $599
Expression Studio + Adobe Photoshop: $599 + $699 = $1,198

However, I get Expression Studio for free as it is bundled with my Microsoft suite package. More on this later.

Adobe Dreamweaver: $399
Adobe CS4 Web Premium Suite: $1,699

Silverlight vs. Flash

Inevitably, “the Microsoft stack” has to run into the Silverlight stack, because Microsoft pushes that product out, too. I’m not going to get into the religious debate over whether Adobe Flash is better than Microsoft Silverlight, except to say a couple of very important things. First of all, I understand that it’s a no-brainer that everyone has Flash; 98% of the web’s user base has it. That said, supporting Microsoft Silverlight for your user base (that is, getting your users to obtain it) is not hard at all. So let’s just get that out of the way, okay? Yes, I know that Silverlight comes at the cost of a one-click install versus a no-click install. Life goes on.

Okay. Let’s talk about tools. With Adobe Flash, you have three options, really, for developing Flash solutions: 1) Adobe Flash Professional, 2) Adobe Flex (an Eclipse-based IDE for developing Flash-based applications), or 3) third-party apps like SWiSH. Fortunately, Adobe has recently been rumored to be planning on merging Flash Pro and Flex functionality, which is a relief because Flex did not have the design power of Flash Pro and Flash Pro didn’t have the development power of Flex. Meanwhile, though, Flash Pro and SWiSH are hardly tools I can take seriously as a software developer, and unfortunately, at $249, Flex is expensive.

Microsoft, however, offers the functionally equivalent toolset with the Expression suite and with Visual Studio. The Silverlight Tools for Visual Studio integrate with Visual Web Developer, providing Silverlight developers a completely free IDE for developing compelling Silverlight applications. So let’s get that out of the way: You do not need to spend a dime on dev tools to develop Silverlight apps.

Expression Blend, however, is a commercial product, functionally comparable to Adobe Flash Professional as well as, in my opinion, Apple’s Interface Builder (with which iPhone application interfaces are designed). It is a rich designer tool for Silverlight as well as for WPF (Windows applications), outputting XAML, the XML markup required for Silverlight and WPF applications. It also provides a syntax-highlighting, IntelliSense-ready (code completion) editor for C# and Javascript code, so technically you could accomplish much using just Expression Blend, but Microsoft recommends (and I do, too) using Expression Blend in combination with Visual Studio / Visual Web Developer 2008 Express.

Microsoft Visual Web Developer 2008 Express with Silverlight Tools: $FREE
Microsoft Expression Blend: $599 (full Studio suite)
Together: $599
Microsoft Expression Professional Subscription (Expression Studio plus Windows, Visual Studio Standard Ed., Office, Virtual PC, and Parallels Desktop for Mac): $999

Adobe Flex Builder: $249
Adobe Flash Professional: $699 (standalone)
Together: $948

The long and short of it: in terms of cost savings, Silverlight development costs are on par with Flash development costs, but can in fact go a lot further per dollar including at the price of $FREE, depending on how much tooling you need.

Databases

Then there are the databases. The non-Microsoft stacks primarily include mySQL and PostgreSQL, among others. Mind you, these databases work fine in a Microsoft world, too, just like everything else, but the Microsoft stack tends to work best with Microsoft SQL Server.

Okay, let me just say at this point that Microsoft SQL Server 2008 is, by far, a vastly superior RDBMS than most anything I have seen from anyone, in every respect. Don’t get me wrong, I greatly admire mySQL and the other RDBMSs out there, but SQL Server is seriously the bomb.

But let’s talk about pricing. Just like Visual Studio has a prohibitively expensive offering available to enterprise users, SQL Server 2008 Standard Edition comes to us at a whopping $5,999. That’s just a hair less than the price of my Toyota when I bought it (used).

But, once again, there’s an expensive commercial offering for everything under the sun. MySQL also has a commercial offering at $599, which I’ll admit is only 1/10th the cost of SQL Server standard edition but isn’t exactly free either.

But seriously, who comparing development stacks actually pays for this stuff? Read on.

Everything Starts At Free

Technically, one could download the SDKs (for free) from Microsoft and do most anything. Most of it would be from the command line, but even XamlPad.exe is bundled with the Windows SDK to let you create XAML files for WPF with a WYSIWYG preview. (For Silverlight, you might try Kaxaml’s beta release.)

But who on the Microsoft stack wants to use the command line? If you’re new to the Microsoft development stack, the first place you should turn to is the Express suite, which includes among other things Visual C# Express, Visual Web Developer Express, and SQL Server Express. Empowered with each of the components of the Express suite, you as a developer have all the extremely powerful tools you need to accomplish almost any development task, with absolutely no licensing fees whatsoever. There really is no fine print with this; the Express editions have a few functional limitations that are very rarely (if ever) showstopping, and you’re not allowed to extend the Express product and try to sell your extension or to redistribute the Express products themselves, but there’s no pricing structure at all for any of the Express suite downloads.

I must say, the 2008 flavors of the Express products are, far and away, the most powerful software development solutions I’ve ever seen as a free offering, and definitely compete fairly with the likes of Eclipse and NetBeans in terms of providing what the typical developer needs to build a basic but complete product or solution without a software budget. Ironically, in my opinion, Microsoft specifically created a web site for the Express flavors of Visual Studio to make it all look crappy compared to Visual Studio Team Suite. The Express web site does not do these tools justice. Combined, the Express products are very rich and powerful, and the web site makes them look like a boy’s play dough or G.I. Joe.

I must include SQL Server 2008 Express in saying that the Express products are very rich and powerful. Particularly if you get SQL Server 2008 Express with Advanced Services, which includes Management Studio Express, this RDBMS suite is insanely powerful and complete, and is by far more capable than mySQL. And no, people, SQL Server Express does not come with licensing restrictions. It’s free, completely free. Free, period. It has a few technical/functional limitations, such as that it cannot consume more physical RAM (not to be confused with database size) than 1 GB, and there are limitations on redistributing the Express products. But there is otherwise no licensing fine print. You can use it for commercial purposes. Have at it.

Beyond these Express versions, there’s also #develop (pronounced “SharpDevelop”). #develop is a non-Microsoft IDE for developing .NET applications on Windows, and it’s quite functional. Initially I think it was built with Mono in mind, but in the long run it never implemented Mono; instead, Mono took some of #develop and made it MonoDevelop. #develop is a very well implemented IDE and is worth checking out, particularly given its free price. However, since #develop isn’t a Microsoft tool, it’s not really part of the Microsoft stack.

The Cheap And Free Bundle Package Deals

If the Express flavors aren’t good enough for you, now I get to mention how to get everything you might ever need—and I really mean everything, including Windows Server 2008 R2 Enterprise Edition, SQL Server 2008 Enterprise Edition, Visual Studio Team Suite with Team Foundation Server, and Expression Studio—for absolutely no cost whatsoever. The only catch is that you must be needing this (a free offering). If you don’t need it because you have a heckofalot of money, then, well, go get a life.

Microsoft is still giving away all the tools you need to rely on the Microsoft stack at absolutely no cost whatsoever through a package deal called BizSpark, which basically gives any start-up company (including one-to-five-man micro-ISVs like yourself(??)) an MSDN Subscription with fully licensed rights to use everything under the sun in development tools and operating systems at absolutely no cost (except for a $100 closing fee after a couple of years, I think?). If you’ve been struggling as a business for more than three years, or if your revenue exceeds $1mil a year, you don’t qualify; otherwise, if you intend to create a product (including a web site hosted on IIS) that’s core to your start-up, you do. It’s as simple as that. But don’t take my word for it, read the fine print yourself.

[Added 9/26/2009:] If you’re not a software business start-up but more of a web services start-up, creating a web site, or are a web designer, there’s a brand spanking new program for you, too, that’s just like BizSpark but targets you specifically. It’s called WebsiteSpark. I’m injecting mention of this into this blog post but already discussed it in a follow-up post; here are the basics: For a $100 offing fee (a fee that you pay when your license ends, rather than when it begins) you get Windows Web Server 2008 R2, SQL Server 2008 Web Edition, Visual Studio 2008 Professional Edition, and Expression Studio 3, and your license ends in three (3) years (same as BizSpark).

But let’s say you’re not really in business, you’re a college student, and you just need the software, without the pressure of being monitored for pursuing some kind of profit. Assuming that you are indeed in college, there’s hope for you, too, a complete suite of software for you including Windows Server 2008 Standard Edition, SQL Server 2008 Standard Edition, and Visual Studio 2008 Professional Edition, among other things, through a program called DreamSpark. All you need to qualify is to be a student. Congratulations.

An older program I took advantage of a few years ago, while Microsoft was still experimenting with these package deals, was the Empower program, which is like the BizSpark program but costs a few hundred bucks and doesn’t give you the ridiculously extensive Team Suite edition of Visual Studio. You basically have a year or two to enjoy it, and must offer a product within that timespan, after which point they drop you. But it was still a great deal considering the alternative outside of BizSpark was full-on full-priced licensing.

If you want a “normal everyday customer deal”, the MSDN subscription is still a good option. For about $1,200 for the Visual Studio Pro with Premium MSDN, you get everything under the sun (everything in BizSpark), except only the Team Suite flavor of Visual Studio. I’d save up my money for that even now if I didn’t already have what I needed.

Finally, if these still aren’t good enough for you, let me just say that if you work for an employer who provides an MSDN subscription directly to you as an employee (and I’ve had at least five or six employers do this in my career), and you go and use one of the unused licenses of one of the products under MSDN for your own personal use, unless Microsoft or your employer actually bother to check the download or activation history of your MSDN account, *psst* hey buddy, nobody will ever know. *wink* Seriously, don’t pirate. But hey I’m just sayin’. If you’re careful to only use the licenses that are not being and won’t be used (and in most cases with MSDN subscriptions there’s a ton of them), nobody will care.

Windows Web Hosting

All these things said, if you’re building a web site, you don’t likely need to buy Windows at all, other than the Windows instance on which you’re developing your app. You can rely on a third-party web host just like nearly everyone else does. The price for hosting an ASP.NET app on a Windows-based server is typically about 20% more than the Linux offerings, but starts at $4.99/mo. You typically have to pay a little extra, as well, for extensive SQL Server requirements, but the basics are usually bundled into these hosted deals.

The Costs Of Knowledge

Honestly, at $4.99 or even $10 a month, I don’t know what people would be complaining about. That’s a good price to host a Microsoft-tools-based solution. Sure, I can get a Linux-hosted site running somewhere for as little as $2.99, but this comes at a prohibitive cost to me. First of all, I, like most PC users (“most” statistically speaking), am already familiar with Windows. To use Linux hosting effectively, one must explore and consume a lot of knowledge that otherwise has no relevance to my existing work-and-play environment.

Well let’s assume, then, that I know neither, and that I only use Windows for e-mail and web browsing. Let’s assume that I’m looking at PHP vs. ASP.NET and mySQL vs. SQL Server Express.

Linux proponents will say that you can dive right into PHP and mySQL because Linux doesn’t cost anything. But if you’re already running a moderately recent version of Windows, which statistically speaking you probably are, then this point is completely moot. Even with Windows XP (which is nearly a decade old and is showing its flatulent age) you can accomplish much with the tools that are already available to you.

At that point, then, which direction you should choose is going to be purely a matter of taste, vendor support, learning curve, and culture, because you can do pretty much anything on the Microsoft stack absolutely for free, or cheaper than the non-Microsoft alternative (i.e. Expression Studio vs. CS4), at every level, with no or very few strings attached.

I’d argue, then, that the cost of knowledge is the only significant cost factor if you already have Windows and you’re just doing your own thing. Both the Microsoft and the non-Microsoft user communities are strong and will assist you as you learn and grow. However, I prefer the Microsoft path specifically because the education, training materials, documentation, and, yes, marketing, all come from one vendor. It’s not lock-in that I want, not at all, so much as it’s the consistency that I enjoy (not to mention the intuitiveness of the Microsoft platform at every level from a user’s perspective). Everything starts with MSDN and Microsoft employees’ blogs, for example, and from there I get everything I need from help on how to use new C# language features to how to use Visual Studio to how to configure or extend IIS. Whereas, with the LAMP community, everything is fragmented and fractured. If that’s your preferred style, great. Just keep in mind that Windows can do everything you’re already doing in Linux. ;)

[Added 9/26/2009:] As I mentioned (er, injected) above under “The Cheap And Free Bundle Package Deals”, Microsoft just created a new program called WebsiteSpark. In addition to the Windows, SQL Server, Visual Studio, and Expression Studio licenses, you also get professional training. This training is still “coming soon”; I suspect it’ll be online training, but it’s professionally produced training nonetheless (no doubt).

Discussions In The Community

Browsing the comments at http://stackoverflow.com/questions/1370834/why-is-microsoft-stack-said-to-be-costly/1376168#1376168 infuriates me. This is actually the reason why I felt compelled to post this blog article. I am so sick and tired of the FUD that ignorant anti-Microsoft proponents keep pumping out. I’m going to assume that the OP’s context was for web applications, but it doesn’t matter much either way.

  • “But still, Linux hosting is cheaper than Windows hosting at pretty much every level.”

Ahh yes, web hosting. At $4.99 or even at $10 per month I really don’t care.

If we’re talking about VPS or dedicated server hosting, that’s another story. Let’s just say I have a Linux VPS I pay $30/mo. or so for, but I really don’t use it for much because it just doesn’t do enough for me reliably and intuitively, and meanwhile this blog is hosted on a $160/mo. virtual dedicated server (hosted) with Windows Server 2008, but it’s heavily used. I feel I get what I pay for.

  • “Linux hosting is almost always cheaper for the simple reason that the MS stack costs the host more to license (which is the point of most posts). Also you don't get development tools with a hosting service. Let's not forget that you're also liking going to need a more expensive "Ultimate Developer, Don't Gimp It" version of Windows desktop to run the dev tools.”

I don’t know what “Ultimate Developer, Don’t Gimp It” means, but I do agree that Ultimate is the best flavor of Windows to do development on. However, you don’t need Ultimate edition to do Microsoft stack development. Visual Web Developer (which is free) comes with its own test web server and installs fine on Windows XP Service Pack 2 or on Windows Vista Home Basic. And its output works great at targeting Windows based web hosts.

  • “I've heard of express editions. I've even downloaded some. I seem to remember a license condition about non-commercial use, although I may be wrong. I don't think the express editions are particularly good for commercial development in any case.”

Hogwash. The Visual Studio Express editions are blatantly characterized on Microsoft’s pathetic Express web site as being cheap, simple, and even a little crappy, but in fact they are extremely functional and capable of doing much more than “hobbyist” solutions. The suite is really very powerful and I for one believe that if Microsoft only had the Express suite and sold it as their commercial offering it would still be a powerful, viable platform for many shops. And yes, you’re allowed to use it for commercial development, and it works great for it.

However, as described above, there are ways to get the Professional and Team Suite editions of Visual Studio and SQL Server Developer Edition (full) without shelling out a lot or even any money.

  • “I don't know Microsoft's specific licensing policies (I can assume they are pretty reasonable), but I can tell you that developer tools are often more pricey than you'd imagine when you start licensing for your company.
    Often when you start buying developer licenses for teams of, say, 20-50 you are starting to talk about millions of dollars up front costs. $100,000 per developer wouldn't be unheard of (not counting the often mandatory annual support fees which can double that number easily).”

Ridiculous. $10,000, which is a tenth of what this guy said, is all it costs to get everything under the sun without one of the special deals like BizSpark. And if you have a team of that size and you’re an established corporation, it would be beneath you to still be asking the question, “Is the Microsoft stack really more expensive?” At that point it’s simply business. And I must say, Microsoft doesn’t suck at supporting its fully-paying customers.

At any rate, I must say again, BizSpark (bundled suite of everything) is completely free, with a $100 closing fee.

  • “If you want to use ASP.NET you need
    • IIS
    • A server with Windows (for IIS)
    • Visual Studio
    • A work station with Windows for Visual Studio

    If you want to use PHP, Perl, Mono, Ruby... you need

    • A web server that supports the technology wanted. May be Apache, IIS...
    • An OS that supports your weberver
    • A workstation with any Linux, Window or mac”

This is silliness. If you want to use ASP.NET, you can go Mono all the way on Mac or Linux and never touch Windows or IIS. But ASP.NET wasn’t the discussion; the Microsoft stack was the discussion.

The Microsoft stack implies Microsoft being the vendor at every primary level of the software stack. So of course you need Windows. (And for the third or fourth time, statistically speaking you probably already have it.) And Mono wouldn’t count because it’s not Microsoft, so of course you need IIS. #develop (SharpDevelop) and other non-Microsoft development IDEs don’t count because they’re not Microsoft, so of course you would probably use Visual Studio.

On the other hand, “needing IIS” has no meaning because IIS is a part of Windows; it’s like saying you need a hard drive, plus you need a computer (to contain the hard drive). It comes at no cost. It’s not a product; it’s a technology component of Windows.

Visual Studio is also not needed, rather it’s available as an option, and its Express flavors are free. You can also use vi, emacs, Notepad.exe, whatever you like. There is literally nothing that LAMP developers enjoy in their development lifecycle that they cannot establish with the Microsoft stack. If you want to write in vi and compile with a command line using ant and make, great, use vi and NAnt and NMake or MSBuild. If you like your command shells, great, most of the Linux command shells are available in Windows, plus Windows’ PowerShell. Have at it. But please, please don’t assume that you have to use Visual Studio if you use the Microsoft stack but you get to use simpler tools for LAMP development. The Microsoft stack has all those simpler tools at its disposal, too. (Yes, all for free, with the Windows SDK.)

  • “I don't think they're talking about the time required to develop on the Microsoft stack. They're talking about the cost of:
    • tools (Visual Studio, Resharper);
    • operating systems (Windows Vista, Windows Server); and
    • databases (SQL Server 2005/2008).”

*sigh* Need I say more and repeat myself? And if Resharper were available for PHP/Ruby, and I were doing PHP/Ruby development, I’d pay for that, too.

 


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
