Esent: The Decade-Old Database Engine That Windows (Almost) Always Had

by Jon Davis 30. August 2010 03:08

Windows has a technology I just stumbled upon that should make a few *nix folks jealous. It’s called Esent. It’s been around since Windows 2000 and is still alive and strong in Windows 7.

What is Esent, you ask? It’s a database engine. No, not like SQL Server, it doesn’t do the T-SQL language. But it is an ISAM database, and it’s got several features of a top-notch database engine. According to this page,

ESENT is an embeddable, transactional database engine. It first shipped with Microsoft Windows 2000 and has been available for developers to use since then. You can use ESENT for applications that need reliable, high-performance, low-overhead storage of structured or semi-structured data. The ESENT engine can help with data needs ranging from something as simple as a hash table that is too large to store in memory to something more complex such as an application with tables, columns, and indexes.

Many teams at Microsoft—including Active Directory, Windows Desktop Search, Windows Mail, Live Mesh, and Windows Update—currently rely on ESENT for data storage. And Microsoft Exchange stores all of its mailbox data (a large server typically has dozens of terabytes of data) using a slightly modified version of the ESENT code.

Significant technical features of ESENT include:

  • ACID transactions with savepoints, lazy commits, and robust crash recovery.
  • Snapshot isolation.
  • Record-level locking (multi-versioning provides non-blocking reads).
  • Highly concurrent database access.
  • Flexible meta-data (tens of thousands of columns, tables, and indexes are possible).
  • Indexing support for integer, floating point, ASCII, Unicode, and binary columns.
  • Sophisticated index types, including conditional, tuple, and multi-valued.
  • Columns that can be up to 2GB with a maximum database size of 16TB.

Note: The ESENT database file cannot be shared between multiple processes simultaneously. ESENT works best for applications with simple, predefined queries; if you have an application with complex, ad-hoc queries, a storage solution that provides a query layer will work better for you.

Wowza. I need 16TB databases, I use those all the time. LOL.

My path to stumbling upon Esent was first by looking at RavenDB, which is rumored to be built upon Esent as its storage engine. Searching for more info on Esent, I came across ManagedEsent, which provides a crazy-cool PersistentDictionary and exposes the native Esent with an API wrapper.

To be quite honest, the Jet-prefixed API entry points look to be far too low-level for my interests, but some of the helper classes are definitely a step in the right direction toward making this API more C#-like.

I’m particularly fascinated, however, by the PersistentDictionary. It’s a really neat, simple way to persist ID’d serializable objects to the hard drive very efficiently. Unfortunately it is perhaps too simple; it is no replacement for NoSQL services that provide rich document querying and indexing.
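To give a feel for how simple it is, here’s a minimal sketch of using PersistentDictionary, loosely based on the ManagedEsent samples. The directory name and keys are made up for illustration; you’d need the ManagedEsent assembly referenced for this to compile.

```csharp
using System;
using Microsoft.Isam.Esent.Collections.Generic;

class PersistentDictionaryDemo
{
    static void Main()
    {
        // "Names" is an arbitrary directory; ManagedEsent creates the
        // Esent database files there if they don't already exist.
        using (var dictionary = new PersistentDictionary<string, string>("Names"))
        {
            dictionary["jon"] = "Jon Davis"; // persisted to disk, transactionally
            Console.WriteLine(dictionary["jon"]); // read back like any IDictionary
        }
    }
}
```

The appeal is that it looks and behaves like a normal IDictionary<TKey,TValue>, while Esent quietly handles durability and crash recovery underneath.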

Looks like someone over there at Microsoft who plays with Esent development is blogging:


Microsoft Windows | Software Development

Four Methods Of Simple Caching In .NET

by Jon Davis 30. August 2010 01:07

Caching is a fundamental component of working software. The role of any cache is to decrease the performance cost of data I/O by moving the data as close as possible to the execution logic. At the lowest level of computing, a thread relies on a CPU cache to be loaded up each time the thread context is switched. At higher levels, caches are used to offload data from a database into a local application memory store or, say, a Lucene directory for fast indexing of data.

Although there are some decent hashtable-become-dictionary scenarios in .NET as it has evolved, until .NET 4.0 there was never a general purpose caching table built directly into .NET. By “caching table” I mean a caching mechanism where keyed items get put into it but eventually “expire” and get deleted, whether by:

a) a “sliding” expiration whereby the item expires a TimeSpan-determined amount of time after it was last added or accessed,

b) an explicit DateTime whereby the item expires as soon as that specific DateTime passes, or

c) the item is prioritized and is removed from the collection when the system needs to free up some memory.

There are some methods people can use, however.

Method One: ASP.NET Cache

ASP.NET, or the System.Web.dll assembly, does have a caching mechanism. It was never intended to be used outside of a web context, but it can be used outside of the web, and it does perform all of the above expiration behaviors in a hashtable of sorts.

After scouring Google, it appears that quite a few people who have discussed the built-in caching functionality in .NET have resorted to using the ASP.NET cache in their non-web projects. This is no longer the most-available, most-supported built-in caching system in .NET; .NET 4 has an ObjectCache which I’ll get into later. Microsoft has always been adamant that the ASP.NET cache is not intended for use outside of the web. But many people are still stuck in .NET 2.0 and .NET 3.5, and need something to work with, and this happens to work for many people, even though MSDN says clearly:


The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly.

The class for the ASP.NET cache is System.Web.Caching.Cache in System.Web.dll. However, you cannot simply new-up a Cache object. You must acquire it from System.Web.HttpRuntime.Cache.

Cache cache = System.Web.HttpRuntime.Cache;

Working with the ASP.NET cache is documented on MSDN here.
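To make that concrete, here’s a sketch of adding an item with a sliding expiration and a NotRemovable priority; the key and value are made up for illustration, and you’d need a reference to System.Web.dll.

```csharp
using System;
using System.Web;
using System.Web.Caching;

class AspNetCacheDemo
{
    static void Main()
    {
        Cache cache = HttpRuntime.Cache;

        // Insert with: no dependencies, no absolute expiration,
        // a 20-minute sliding expiration, and NotRemovable priority
        // so memory pressure alone won't evict the item.
        cache.Insert(
            "user:42",                    // the key must be a string
            new { Name = "Jon" },         // any object as the value
            null,                         // no CacheDependency
            Cache.NoAbsoluteExpiration,
            TimeSpan.FromMinutes(20),
            CacheItemPriority.NotRemovable,
            null);                        // no removal callback

        var value = cache["user:42"];     // the indexer returns null on a miss
    }
}
```

Note how every caching detail has to be restated on each Insert() call; that verbosity is one of the cons listed below.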

Pros:
  1. It’s built-in.
  2. Despite the .NET 1.0 syntax, it’s fairly simple to use.
  3. When used in a web context, it’s well-tested. Outside of web contexts, according to Google searches it is not commonly known to cause problems, despite Microsoft recommending against it, so long as you’re using .NET 2.0 or later.
  4. You can be notified via a delegate when an item is removed, which is useful if you need to keep the item alive and could not set its priority to NotRemovable in advance.
  5. Individual items have the flexibility of any of (a), (b), or (c) methods of expiration and removal in the list of removal methods at the top of this article. You can also associate expiration behavior with the presence of a physical file.

Cons:
  1. Not only is it static, there is only one. You cannot create your own type with its own static instance of a Cache. You can only have one bucket for your entire app, period. You can wrap the bucket with your own wrappers that do things like pre-inject prefixes in the keys and remove these prefixes when you pull the key/value pairs back out. But there is still only one bucket. Everything is lumped together. This can be a real nuisance if, for example, you have a service that needs to cache three or four different kinds of data separately. This shouldn’t be a big problem for pathetically simple projects. But if a project has any significant degree of complexity due to its requirements, the ASP.NET cache will typically not suffice.
  2. Items can disappear, willy-nilly. A lot of people aren’t aware of this—I wasn’t, until I refreshed my knowledge on this cache implementation. By default, the ASP.NET cache is designed to destroy items when it “feels” like it. More specifically, see (c) in my definition of a cache table at the top of this article. If another thread in the same process is working on something completely different, and it dumps high-priority items into the cache, then as soon as .NET decides it needs to reclaim some memory it will start to destroy items in the cache according to their priorities, lower priorities first. All of the examples documented here for adding cache items use the default priority, rather than the NotRemovable priority value, which keeps an item from being removed for memory-clearing purposes but still removes it according to the expiration policy. Peppering CacheItemPriority.NotRemovable throughout your cache invocations is cumbersome; otherwise a wrapper is necessary.
  3. The key must be a string. If, for example, you are caching data records where the records are keyed on a long or an integer, you must convert the key to a string first.
  4. The syntax is stale. It’s .NET 1.0 syntax, even uglier than ArrayList or Hashtable. There are no generics here, no IDictionary<> interface. It has no Contains() method, no Keys collection, no standard events; it only has a Get() method plus an indexer that does the same thing as Get(), returning null if there is no match, plus Add(), Insert() (redundant?), Remove(), and GetEnumerator().
  5. Ignores the DRY principle of setting up your default expiration/removal behaviors so you can forget about them. You have to explicitly tell the cache how you want the item you’re adding to expire or be removed every time you add an item.
  6. No way to access the caching details of a cached item such as the timestamp of when it was added. Encapsulation went a bit overboard here, making it difficult to use the cache when in code you’re attempting to determine whether a cached item should be invalidated against another caching mechanism (i.e. session collection) or not.
  7. Removal events are not exposed as events and must be tracked at the time of add.
  8. And if I haven’t said it enough, Microsoft explicitly recommends against using it outside of the web. And if you’re cursed with .NET 1.1, you’re not supposed to use it outside of the web with any confidence of stability at all, so don’t bother.

Method Two: The Enterprise Library Caching Application Block

Microsoft’s recommendation, up until .NET 4.0, has been that if you need a caching system outside of the web then you should use the Enterprise Library Caching Application Block.

The Microsoft Enterprise Library is a coordinated effort between Microsoft and a third-party technology partner, Avanade (mostly the latter). It consists of multiple meet-many-needs, general-purpose technology solutions, some of which are pretty nice but many of which are in an outdated format, such as a first-generation O/RM that doesn’t support LINQ.

I have personally never used the EntLib Caching App Block. It looks bloated. Others on the web have commented that they thought it was bloated, too, but once they started using it they saw that it was pretty simple and straightforward. I personally am not sold, but since I haven’t tried it I cannot pass a fair judgment. So for whatever it’s worth, here are some pros/cons:

Pros:
  1. Recommended by Microsoft as the cache mechanism for non-web projects.
  2. The syntax looks to be familiar to those who were using the ASP.NET cache.

Cons:
  1. Not built-in. The assemblies must be downloaded and separately referenced.
  2. Not actually created by Microsoft. (EntLib is an Avanade solution that Microsoft endorses as its own.)
  3. The syntax is the same .NET 1.0 style stuff that I believe defaults to normal priority (auto-ejects the items when memory management feels like it) rather than NotRemovable priority, and does not reinforce the DRY principles of setting up your default expiration/removal behaviors so you can forget about them.
  4. Potentially bloated.

Method Three: .NET 4.0’s ObjectCache / MemoryCache

Microsoft finally implemented an abstract ObjectCache class in the latest version of the .NET Framework, and a MemoryCache implementation that inherits and implements ObjectCache for in-memory purposes in a non-web setting.

System.Runtime.Caching.ObjectCache is in the System.Runtime.Caching.dll assembly. It is an abstract class that declares basically the same .NET 1.0–style interfaces that are found in the ASP.NET cache. System.Runtime.Caching.MemoryCache is the in-memory implementation of ObjectCache and is very similar to the ASP.NET cache, with a few changes.

To add an item with a sliding expiration, your code would look something like this:

var config = new NameValueCollection();
var cache = new MemoryCache("myMemCache", config);
cache.Add(new CacheItem("a", "b"),
    new CacheItemPolicy
    {
        Priority = CacheItemPriority.NotRemovable,
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });

Pros:
  1. It’s built-in, and now supported and recommended by Microsoft outside of the web.
  2. Unlike the ASP.NET cache, you can instantiate a MemoryCache object instance.
    Note: It doesn’t have to be static, but it should be; that is Microsoft’s recommendation (see the yellow Caution note in the MSDN documentation).
  3. A few slight improvements have been made vs. the ASP.NET cache’s interface, such as the ability to subscribe to removal events without necessarily being there when the items were added, the redundant Insert() was removed, items can be added with a CacheItem object with an initializer that defines the caching strategy, and Contains() was added.


Cons:
  1. Still does not fully reinforce DRY. From my small amount of experience, you still can’t set the sliding expiration TimeSpan once and forget about it. And frankly, although the policy in the item-add sample above is more readable, it necessitates horrific verbosity.
  2. It is still not generically keyed; it requires a string as the key. So if you’re caching data records keyed on a long or an int, you must still convert the key to a string first.

Method Four: Build One Yourself

It’s actually pretty simple to create a caching dictionary that performs explicit or sliding expiration. (It gets a lot harder if you want items to be auto-removed for memory-clearing purposes.) Here’s all you have to do:

  1. Create a value container class called something like Expiring<T> or Expirable<T> that contains a value of type T, a TimeStamp property of type DateTime storing when the value was added to the cache, and a TimeSpan indicating how far out from the timestamp the item should expire. For explicit expiration you can just expose a property setter that sets the TimeSpan by subtracting the timestamp from a given DateTime.
  2. Create a class, let’s call it ExpirableItemsDictionary<K,T>, that implements IDictionary<K,T>. I prefer to make it a generic class with <K,T> defined by the consumer.
  3. In the class created in #2, add a Dictionary<K,Expiring<T>> as a property and call it InnerDictionary.
  4. The implementation of IDictionary<K,T> in the class created in #2 should use the InnerDictionary to store cached items. Encapsulation would hide the caching method details via instances of the type created in #1 above.
  5. Make sure the indexer (this[]), ContainsKey(), etc., are careful to clear out expired items and remove the expired items before returning a value. Return null in getters if the item was removed.
  6. Use thread locks on all getters, setters, ContainsKey(), and particularly when clearing the expired items.
  7. Raise an event whenever an item gets removed due to expiration.
  8. Add a System.Threading.Timer instance and rig it during initialization to auto-remove expired items every 15 seconds. This is the same behavior as the ASP.NET cache.
  9. You may want to add an AddOrUpdate() routine that pushes out the sliding expiration by replacing the timestamp on the item’s container (Expiring<T> instance) if it already exists.
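The steps above might be sketched like this. This is an illustrative skeleton under my own naming assumptions, not a complete implementation: it omits the full IDictionary<K,T> plumbing and the Timer (steps 4 and 8) for brevity.

```csharp
using System;
using System.Collections.Generic;

// Step 1: a container that knows when its value expires.
public class Expiring<T>
{
    public T Value { get; set; }
    public DateTime TimeStamp { get; set; } = DateTime.UtcNow;
    public TimeSpan ExpiresAfter { get; set; }
    public bool IsExpired => DateTime.UtcNow - TimeStamp > ExpiresAfter;
}

// Steps 2, 3, 5, 6, 7, and 9 in miniature.
public class ExpirableItemsDictionary<K, T>
{
    private readonly object _sync = new object();

    // Step 3: the inner dictionary that actually holds the containers
    // (exposed publicly so caching strategy can be unit-tested; see pros below).
    public Dictionary<K, Expiring<T>> InnerDictionary { get; }
        = new Dictionary<K, Expiring<T>>();

    // DRY: set the default expiration once, then forget about it.
    public TimeSpan DefaultExpiration { get; set; } = TimeSpan.FromMinutes(15);

    // Step 7: raised when an item is removed due to expiration.
    public event Action<K, T> ItemExpired;

    // Step 9: adding again replaces the timestamp, sliding the expiration out.
    public void AddOrUpdate(K key, T value)
    {
        lock (_sync) // step 6: thread locks on all mutations
        {
            InnerDictionary[key] = new Expiring<T>
            {
                Value = value,
                TimeStamp = DateTime.UtcNow,
                ExpiresAfter = DefaultExpiration
            };
        }
    }

    // Steps 5-6: evict an expired item under a lock before returning a value.
    public bool TryGetValue(K key, out T value)
    {
        lock (_sync)
        {
            if (InnerDictionary.TryGetValue(key, out var container))
            {
                if (!container.IsExpired)
                {
                    value = container.Value;
                    return true;
                }
                InnerDictionary.Remove(key);
                ItemExpired?.Invoke(key, container.Value);
            }
            value = default(T);
            return false;
        }
    }
}
```

Note that the key is generic (K, not string), and the caching policy lives on the dictionary rather than being restated per item, which is exactly the DRY win the pros list below describes.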

Microsoft has to support its original designs because its user base has built up a dependency upon them, but that does not mean that they are good designs.

Pros:
  1. You have complete control over the implementation.
  2. Can reinforce DRY by setting up default caching behaviors and then just dropping key/value pairs in without declaring the caching details each time you add an item.
  3. Can implement modern interfaces, namely IDictionary<K,T>. This makes it much easier to consume as its interface is more predictable as a dictionary interface, plus it makes it more accessible to helpers and extension methods that work with IDictionary<>.
  4. Caching details can be unencapsulated, such as by exposing your InnerDictionary via a public read-only property, allowing you to write explicit unit tests against your caching strategy as well as extend your basic caching implementation with additional caching strategies that build upon it.
  5. Although it is not necessarily a familiar interface for those who already made themselves comfortable with the .NET 1.0 style syntax of the ASP.NET cache or the Caching Application Block, you can define the interface to look like however you want it to look.
  6. Can use any type for keys. This is one reason why generics were created. Not everything should be keyed with a string.

Cons:
  1. Is not invented by, nor endorsed by, Microsoft, so it is not going to have the same quality assurance.
  2. Assuming only the instructions I described above are implemented, it does not “willy-nilly” clear items to free up memory on a priority basis (which is a corner-case utility function of a cache anyway; buy more RAM for the machine doing the caching, RAM is cheap).


Among all four of these options, this is my preference. I have implemented this basic caching solution. So far, it seems to work perfectly, there are no known bugs (please contact me with comments below or at jon-at-jondavis if there are!!), and I intend to use it in all of my smaller side projects that need basic caching. Here it is: 


Worthy Of Mention: AppFabric, NoSQL, Et Al

Notice that the title of this blog article indicates “Simple Caching”, not “Heavy-Duty Caching”. If you want to get into the heavy-duty stuff, you should look at memcached, AppFabric, ScaleOut, or any of the many NoSQL solutions that are shaking up the blogosphere and Q&A forums.

By “heavy duty” I mean scaled-out solutions, as in scaled across multiple servers. There are some non-trivial costs associated with scaling out.

Scott Hanselman has a blog article called, “Installing, Configuring and Using Windows Server AppFabric and the ‘Velocity’ Memory Cache in 10 Minutes”. I already tried installing AppFabric, it was a botched and failed job, but I plan to try to tackle it again, now that hopefully Microsoft has updated their online documentation and provided us with some walk-throughs, i.e. Scott’s.

As briefly hinted in the introduction of this article, another approach to caching is using something like Lucene.Net to store database data locally. Lucene calls itself a “search engine”, but really it is a NoSQL storage mechanism in the form of a queryable index of document-based tables (called “directories”).

Other NoSQL options abound, most of them being Linux-oriented, like CouchDB (which runs but stinks on Windows), but not all of them. For example, there’s RavenDB from Oren Eini (author of the popular blog), which is built entirely in and for .NET as a direct competitor in the NoSQL space. There are a whole bunch of other NoSQL options listed over here.

I have had the pleasure of working with and the annoyance of poking at CouchDB on Windows (Linux folks should not write Windows software, bleh .. learn how to write Windows services, folks), but not much else. Perhaps I should take a good look at RavenDB next.




C# | Software Development

I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as “a C++ fixxer-upper”. I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google’s mantra still lines up. The video demonstrates a lot of evils of C++. Despite some attempts at self-edumacation, I personally cannot read nor write real-world C++ because I get lost in the gobbledygook that he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, The ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives as Java and now Go had, and it has actually been fundamentally evolving at the core, such as to replace patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.

While trying to take Go in with great excitement, I’ve been forced to conclude that Google’s own message delivery sucks, mainly by completely ignoring some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there)—much like Microsoft’s message delivery of C#’s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory but I can’t find it). The video linked above doesn’t even show a single example of Java suckage; it just made these painful accusations of Java being right in there with C++ as being a crappy language to work with. I haven’t coded in Java in about a decade, honestly, but back then Java was the shiznat and code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways and ultimately I find it far less appetizing than C#. But where is Google’s nod to C#? Oh that’s right, C# doesn’t exist, it’s a fragment of someone’s imagination because Google considers Microsoft (C#’s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator’s true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did as far as trying to “fix” C++. In fact, most of the problems Go describes of C++ were the focus of C#’s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google’s objectives themselves, we don’t see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for previously C++ development efforts using perhaps emacs or vi (Rob even listed IDEs as a failure in modern languages). Their requirements in Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn’t get in the way of their business objectives, that performs very fast, and fast compilation that lets them escape out of vi to invoke gcc or whatever compiler very quickly and go back to coding. These are certainly great nice-to-haves, but I’m pretty sure that’s about it.

Consider, in contrast, .NET’s objectives a decade ago, .NET being at the core of applied C# as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn’t suck because its intentions were gorgeous—to have a language-neutral marshalling paradigm between runtimes—but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocators. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature, it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and the tools because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant level. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s and thus theoretically Go’s performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement which Google, in its corporate and web sandboxes, is oblivious to. I’m not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement which Google in its corporate and web sandboxes is oblivious to, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here, instead they push Python and Java on their app cloud? I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn’t get this; it’s stupid to have to recompile an application’s distribution, software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation. Once again, Google is not out in the real world, they live in a box (their headquarters and their exposed web).

And with the goals of C#,

  • C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go’s web site, because the language does not look like much of a readability/maintainability improvement over what Java and C# offered up.

  • Go supposedly offers up memory management, but still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static!
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := operator syntax for combined declaration and assignment rather than a plain = operator. I suppose this would help eliminate the issue where people type = where they meant to type == and destroy their variables.
  • Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized, such that late-executed code that should be called after immediate code doesn’t have to be put below the immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it.
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is
        var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..
        int[] p = null; // declares an array variable; p is null; rarely useful
        var v = new int[100]; // the variable v now refers to a new array of 100 ints

      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, nevermind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbledygook. C# offers a richly defined System.Array, and all this stuff is transparent to the coder, who really doesn’t need to know that there are pointers, somewhere, associated with the reference to your array; isn’t that the way it all should be? Is it really necessary to have a completely different semantic (new() vs. make())? Ohh yeah. The frickin pointer vs. the reference.
  • I see Go has a fmt.Printf(), plus a fmt.Fprintf(), plus a fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is.
  • Go has “goroutines”. It’s basically, “go func() { /* do stuff */ }” and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code, so yes, it’s supported. F# (a young .NET sibling of C#) has this, too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s.
  • Go has channels for communication purposes. C# has WCF for this, which is frankly a mess. The closest you can get to Go on the CLR as far as channels go is Axum, which is a variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover.

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (which, frankly, typically suck at a design level when they aren’t tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, honestly I think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined everywhere on planet Earth through the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix and the applications and services that run on it have been the majority; if nothing else, Microsoft has most certainly still had the lead in software as a company, which is my point.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly the fact that Go presentations neglect C# raises serious questions about Go’s trustworthiness.

In my opinion, a better approach to what Google was trying to do with Go would be to take a popular language, such as C#, F#, or Axum, and break it away from the language’s implementation libraries, i.e. the .NET platform’s BCL, replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be both a) a much better language and runtime than Go because it would offer most of the benefits of Go but in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR’s and C#’s multitude of advantages over C/C++), but also b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al, our needs are quite well met. So thanks anyway, Google, but, really, you should go now.


C# | F# | General Technology | Mono | Opinion | Software Development

Chase Bank, The Lowest-Security Bank Ever

by Jon Davis 14. August 2010 23:44

Just filed this complaint via Chase Bank’s “Secure Message Center”:

I am writing to complain about your service. Please note that this is not a request for assistance; I only ask that you pass this complaint along to the authorities in your security department.

I recently received an e-mail asking for transaction confirmation. The text of the e-mail reads (in part):

"As part of our ongoing effort to protect your account and our relationship, we monitor your account for possible fraudulent activity. We have recently attempted to contact you by phone and/or text message but we have been unsuccessful in reaching you. We need to confirm that you or someone authorized to use your account made the following transaction on your Chase Visa account ending in .....

"Please click on one of the two statements below to indicate if this transaction was authorized:

"[Transaction Authorized]"
"[Transaction NOT Authorized]"

I am sorry, but you came *extremely* close to losing me as a customer due to this e-mail. The “Transaction Authorized” link redirects to a site at an unfamiliar host. Who is that? You already raised an alarm; you now have to be trustworthy, Chase. Authenticity is now required of YOU! Navigating directly to that domain to validate its authenticity, Google Chrome showed me the Red Screen of Death, indicating that this site is NOT TRUSTED by Google and should NOT be trusted by me. (Google Chrome’s reason for the mistrust was what the HTTP response headers indicated about the server.)

Since I had already clicked on the link, I scoured the web to see if there was any recourse. I found this:

.. and realized that this appears to be a serious security knowledge failure by my own bank (you!), in the great intention but beyond-horrible execution of attempted security.

Clean this stuff up.

You guys should also NEVER suggest to an e-mail recipient that they simply click on a link to validate a transaction. That is exactly what scammers do. Instead, instruct your customers to type in your web site’s URL themselves and access the Message Center there.

Please don’t scare me like this anymore. Clean up your act, Chase! Or you'll be losing me as a customer.


PS I'm blogging this complaint. This is not something that will just be tucked away in an "annoying feedback" file.


UPDATE: Yeah FYI they followed up within 24 hours with "please call our fraud department at XXX-XXX-XXXX". Typical form letter response from lazy or ignorant outsourced workers who refuse to act upon my request to forward my concerns as a complaint.




Dear Java Developers: Told Ya So!

by Jon Davis 13. August 2010 02:59

Well, I suppose I didn’t tell you so, I didn’t knock on every Java developer’s door and say it. But I did say it. To myself. In a soft mumble. LOL.

What I said is that Java is not any more “Open Source” than .NET is. Granted, a huge majority of the open source community, such as the gobs and gobs of projects that have been hosted for like ever on the big open source hosting sites, is Java-focused. But Java as a platform itself has always been far from open source.

I couldn’t get that Java vs .NET fake movie trailer video. It would have made more sense if it was .NET vs. PHP, Python, or Ruby, but Java?

I could never understand where Java developers got the idea that Microsoft was on this dominating consumption spree to gobble up everything innovative and sue everyone who used their platform, while Java was somehow free from all of this. No programming language has set a monetary-loss/gain precedent like Java, with Sun suing Microsoft over the language’s platform and winning. It was this very behavior from Sun that had me rushing away from Java and into the arms of C# as soon as C# was announced.

It should come as no surprise, then, that Java’s owner would now sue again, once again over the use of the platform in a free, open source distribution platform (Android). You’re surprised? Really?

Truth be told, I was surprised, too, at first. I shed a tear for the Java community. (Almost literally.) I genuinely hate this sort of behavior: companies suing companies because what was thought of as an open and free platform was treated as open and free, while the company suing has no interest in the “open and free” part and wants to make a buck off every use of the platform. I realize that on the surface it’s naturally in every company’s best interest to let every activity be a profitable one. Microsoft giving IronRuby the cold shoulder (they no longer have any full-time maintainers of the project) is another example of this. But I hate all that, I really do. These companies—Microsoft, Oracle, Apple, and Google—really need to maintain strong rapport with their user base if they want to maintain long-term loyalty.

I sold my iPad and set down my iPhone to switch to Android as soon as Apple killed off support for cross-compiled applications. They lost my loyalty for that. I wonder how many other people are in the same boat as me. As for Oracle, bleh! I could never see “Oracle MySQL” or “Oracle Java” or “Oracle Solaris” being word phrases that really went together, they just didn’t work for me, like taking an orange square lego and forcing it on a play-dough ball, just squishy squishy, it doesn’t fit.

I’m sorry, Java, that you must suffer a rude, foul owner and maintainer. Perhaps another, truly open-sourced and open-licensed platform will come about that we can all agree on as being “good”. Perhaps Google’s Go running on Linux is a start. For once, I am seeing great value in the true meaning of “open source” and in the benefits of GPL and other open-source licenses. It’s really a shame that the Java platform didn’t enjoy such licenses up front, but the world will keep turning, life must go on, and we must keep coding, ideally without getting sued!



Hate To Break It To You, But The Browser Is A Tier

by Jon Davis 13. August 2010 01:00

“3-tier architecture”. Those words sounded like hogwash to me as I sat in my cubicle 13 years ago, and they still sound the same to me now. The reason for this is mainly because I haven’t ever actually seen a 3-tier architecture. I have only seen 2-tier and n-tier architectures with a mish-mash of strange layers and pseudo-service components.

Before I say anything more, I want to get one thing out of the way. The rough difference between a layer and a tier is that a tier is the physical isolation of software components such that these components can be distributed over the network, for example, to other servers, whereas a layer is a logical grouping of software components such that they are organized in patterns that meet the particular domain of that level or depth of the application (i.e. UI versus business logic versus database, which might be managed via isolated projects or DLLs). If you disagree with what I just said, I suggest that you go Google it for yourself. Tonight I’m somewhat merging the two terms, speaking mainly of physical tiers but also noting that these tiers are logically grouped as layers for practical purposes in most computing situations.

There is a strange belief among some developers and their managers that web servers represent the UI tier (and layer) of a web application. I am not sure where this belief comes from, but I suspect it might have something to do with prior experience with less-mature web technologies where HTML was static and the browsing interface was more of a “dumb terminal” if you will. Lynx and IE2 and Netscape 2. These days, however, web sites can be, and often are—and in my opinion often should be—wholly performing all dynamic view-layer functions within the context of dynamic HTML and AJAX. In my opinion, a “good” web application should be built to execute entirely within the web browser, and the web server should only be used to access data and to perform proprietary business logic or proprietary business algorithms.

Consider for example a web application that consists of nothing but static HTML, Javascript, CSS, and image file resources, plus isolated REST/JSON web services, perhaps on some alternate host. Such a web app can actually be as powerful and as functional as almost any web application these days, including our own, which I have to work with every day. In that scenario, the combination of HTML and Javascript acts as a client-side workstation application—just like a Windows Forms application or a Visual Basic 6 app of the previous decade—that just happens to be integrating with server resources via web services, those web services being the “proprietary tier and layer for processing shared data”.

By the way, this is actually not bad design, architecturally. I’ve heard a lot of whining and complaining from compiled-code developers over the years that Javascript is a crappy kiddie language that should never be used for application logic, but as one who has had more experience with scripting languages than every last one of these whiners, as well as on-par experience with strongly typed languages, I am quite convinced that Javascript is an excellent and mature language, one that truly supports most of the fundamentals of OOP and one that is worth betting on for professional web development, particularly if modern browsers are targeted with ECMAScript 5. Unfortunately, it does lack a few things, such as design-time/compile-time semantics validation (i.e. no puking at compile-time if you misspell an object’s member reference, as there is no compilation process). But you can certainly unit test against it; that’s why there are frameworks for Javascript like JSUnit, jsunittest, FireUnit, et al, not to mention a unit testing framework I wrote myself at a previous job. Unfortunately, tooling is still inadequate. It’s difficult, for example, to follow deeply nested “class” declarations and the declarations of their deeply nested members, and it’s true that it’s a bit harder to unit test the view, complete with integration tests, with Javascript than it is to test, say, business objects with C#. It’s still doable, though.
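To illustrate (the names here are my own, not from any framework): Javascript’s prototype model covers the OOP fundamentals, and a unit test in the spirit of JSUnit et al can be as simple as an assertion helper.

```javascript
// A tiny "class" via the prototype model.
function Account(owner) {
  this.owner = owner;
  this.balance = 0;
}
Account.prototype.deposit = function (amount) {
  if (amount <= 0) throw new Error("deposit must be positive");
  this.balance += amount;
  return this.balance;
};

// A bare-bones assertion helper, in the spirit of JSUnit/FireUnit.
function assertEquals(expected, actual, message) {
  if (expected !== actual) {
    throw new Error(message + ": expected " + expected + ", got " + actual);
  }
}

// The "test": exercise the object, assert the results.
var account = new Account("jon");
account.deposit(50);
account.deposit(25);
assertEquals(75, account.balance, "balance after two deposits");
```

No compiler will catch a misspelled member here, but the test will, which is the point.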

So let’s take this tier breakdown a step further. Microsoft has actually supported HTTP access in SQL Server since, I believe, version 2005, if not version 2000 or even 7, alongside the native XML support. It is a poor, insecure practice, but it is nonetheless fully supported: you can actually expose this on the front end. Consider then that technically one could create a rich, dynamic web site consisting of nothing but static HTML, Javascript, CSS, and image files, plus native HTTP access to a SQL Server via SQL Server’s HTTP endpoints. This idea is horrible, but I hope it proves a point. While you would need to either download the HTML+script (etc.) files to your hard drive and run them locally, or else make them available via a static HTTP server, this is truly a 2-tier application—one tier in the browser, one tier in SQL Server. Don’t like the idea of downloading the files and running them locally? Take it a step further: store the HTML and script files as database column values within SQL Server, and access them via SQL Server’s HTTP support. It’s still complete with multi-user support and distributed computing potential (you can rig SQL Server to be in a farm). But it remains a 2-tier application, not 1-tier; the actual execution of CPU cycles to process the application is performed in two distinct places—within the web browser and within SQL Server.

As you tear down the moving parts of a web server, et al, something begins to happen: everything becomes blazingly fast. Why? The more end points you have in the life cycle of an application session, the more I/O, unpackaging and repackaging, and potential fail points there are. The more layers you add, the greater the struggle to keep up with the performance requirements and maintenance tasks becomes. Consider the possibility of rich business logic support within SQL Server stored procedures. Imagine all your business logic tucked away in stored procedures. It’s doable, folks, and a number of companies hire application developers who are SQL Server developers first and C# or web developers as an afterthought, where C# code is only used for UI. (I know this because I was interviewed by one such company not too long ago.) Every development team has a completely different perspective on how application development should work. Usually these differences are due to differences in the business domains. Sometimes, however, the differences are due to variations in management experience (i.e. cluelessness).

One of the applications I work on—I won’t say whether it’s my day job or a side project being co-developed with a friend, or whatever, but let’s just say that tonight as with most of this year since I heard about these circumstances they have had me flabbergasted—is a painful 5-layer, 4-tier architecture, for a small-to-medium sized application. The 5 layers consist of browser UI, server-side app logic, a DAL accessible via WCF web services, CRUD sprocs which are the only means to DB data, and then finally the DB data. (Plus some Windows services, each with multiple tiers.) It just grew this year to 4 tiers because of a belief by a “privacy committee” that the web server, being “accessed by the customer”, needed another layer and tier to sit between the web server and the database where other customers’ data is stored. The application literally could not be migrated to new servers and be given access to the application’s own database until it met this requirement.

I won’t get into how painful it was to inject another tier—the months one developer took just to get the DAL to be proxied via WCF to the isolated web services servers. Nor will I blabber on about the complaining by the leaders who seem to want to blame the developers for wasting time with a server upgrade when there were no performance improvements, while at the same time insisting that this switch to a multi-tier architecture was somehow a good thing for the application. Nor that they had the nerve to keep calling this “3-tier” when it is clearly at least four tiers.

Introducing tiers to a web application is immensely costly to the performance of an app. If n-tier architectures mean “scaling out”, let’s just put it this way: “Scaling out” a legacy application that wasn’t built to scale out is a computer science process that is extremely hard and has no business being mandated by people who are not experienced software developers at their core unless they are willing to let go of at least six to twelve months for a rewrite. The non-leader developers in this case only had the authority and permission to patch the existing codebase in order to meet the requirements—getting the data proxied. In order to correctly build an n-tier, multi-layered application, you have to design much of it as such from the ground up. Most architectures that work best written one way in a two or three tier configuration will not scale well at all in an n-tier setting. And this proves true in this case, as some features in the app I’m referring to now take minutes to perform what used to take a few seconds to perform, because the data must undergo an extra hop via unoptimized XML and the leadership refuses to allow for binary+TCP serialization.

This is my challenge to you, the reader, who for all I know is only myself; it is a challenge nonetheless. Create a rich web application consisting of nothing but static HTML and script. Mock some web services by using static XML or JS(ON) files. Create unit tests for all business and data points. At the end of it all, ask yourself, “Why do I need RoR/ASP.NET/PHP?” I’m sure that the answer will not be, “I don’t,” but the point of this exercise is to prove out the significance and self-dependence of the browser as a tier and the power of client-side application programming for the web. Doing this should actually help, too, with server-side unit testability. Web scripting languages and ASP.NET are prone to being painfully untestable because they really require a browser to render the results before any functionality can be monitored, but if all of the rendering logic has been stripped out of the server-side operations, then unit testing the business logic, which is all that’s left, is a cakewalk.
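To sketch the mocked-services part of the challenge (everything here is illustrative; the data and function names are mine): the “web service” is just static JSON, and the client-side logic that consumes it is plain Javascript that can be unit tested with no server framework at all.

```javascript
// Inlined here for brevity; in the real exercise this string would live in a
// static .json file fetched via XMLHttpRequest from the static HTTP server.
var mockCustomersJson = '[{"id":1,"name":"Ada"},{"id":2,"name":"Linus"}]';

// "Data layer": parse the service response.
function loadCustomers(json) {
  return JSON.parse(json);
}

// "Business logic": pure, browser-only, trivially testable.
function findCustomer(customers, id) {
  for (var i = 0; i < customers.length; i++) {
    if (customers[i].id === id) return customers[i];
  }
  return null;
}

var customers = loadCustomers(mockCustomersJson);
```

Swapping the mock for a live REST/JSON endpoint later changes the transport, not the logic, which is exactly why the browser deserves to be called a tier.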

This exercise can also prove out the value (and justify the pay scale) of strong front-end developers. As one who has sought hard to be strong on all the layers of web development (front-end, middle-tier, and DB), it disgusts me and makes me sick to my stomach when I hear a boss shrug off a resume for being strong on the front-end, or praise a resume for lacking front-end talent. (I am not speaking of my own resume, by the way; let’s just say that I am within earshot of occasional interviews for other teams.) The truth is that every layer and every tier is important and can have equal impact on the success of an application. Leaders need to learn and understand this, as well as the costs and frustrations of adding more of these tiers and layers to an existing architecture.



If you tried to contact me in the last few months ...

by Jon Davis 8. August 2010 19:49

If you or anyone you know ever tried to contact me in the last few months via the Contact link above, you will need to re-send as I never got it.

I just waded through a few hundred junk comments to delete them and a few tens of legit comments to approve them. I then realized I wasn't getting e-mailed about these comments anymore. It seems my SMTP configuration was incorrect here on this blog.

I fixed this yesterday, and someone just contacted me via the form today, so with that short of a time frame I'm worried I may have missed quite a few e-mails from my Contact form, hence the reason why I felt it necessary to post this blog post today. Again, if you attempted to contact me via my Contact form and I did not reply, please give it another try.



Think Like Apple On The Back-End Too, Please

by Jon Davis 1. August 2010 19:36

Two or three years ago I became a Mac user again. I bought a Mac Mini to complement my iPhone which I wanted to write software for. This plan went forward, up until I quit my job and I decided to focus on my core technical strengths which are in programming on the Windows platform with .NET Framework and Visual Studio. Since then, that Mac has just been sitting in my home office, running quietly as an always-available little machine that I can sit down in front of at any time and tinker with. I don’t, usually, not anymore, since I have no need. But there it is.

There are now some things that I miss about “living on” my Mac for those few months, and some things that I don’t. Overall, some of the longest-running strengths that the Mac has over Windows still ring true today, the most notable of which is this: the Mac strives to be simple and easy to use up front. Getting to advanced configuration settings can require an extra level or two of depth beyond what Linux and Windows expose at the outset, but once you learn the basics of where to go for the advanced configurations and properties, it’s as simple as it is on Linux or Windows. And on the forefront, whatever it is—whatever application you are using, whatever service you are installing—the Mac workflow is so simple it’s practically silly.

With Microsoft software, prioritization for simplicity is strictly based on how stupid the user is supposed to be. If the user is supposed to be a stupid consumer end user with absolutely no brain or thought power or an IQ of 72, well, Windows Live applications attempt (more or less) to demonstrate how to make powerful desktop software highly usable. But the further you get towards the enterprise environment, and the closer you get to the back end, the more frustrating the experience is going to be. Microsoft Office compared to Windows Live applications is just one tiny little step towards chaos. You have all these gobs and gobs of features and Microsoft has to figure out how to prioritize the features for the end user and make the most obvious features available up front while the lesser-used features are tucked away. Enter the Office ribbon, which we all know about.

But now let’s go further back towards the back-end or low-end of software. A lot further back. Let’s look at BizTalk. Yeah. No wait, scratch BizTalk; that thing is such a pile of gobbledygook with its own horrible reputation that I was never able to even look at it, much less give discovering it a try. Let’s look at AppFabric.

So a few weeks ago my boss and I sat down to see if we could replace the ASP.NET Session State service with AppFabric. It was a flop. We couldn’t get it running. We didn’t try very long—maybe thirty minutes. But that was long enough to call it quits and declare it a failed install. Everything about this process was so incredibly representative of Microsoft thinking in terms of, “We don’t need no stinkin’ Apple-thinkin’ when we’re runnin’ apps for engineerin’.” Unlike Apple, who understands that both simple apps and advanced services must be consumed by human beings, Microsoft simply cannot handle such concepts as obvious defaults and ugly-free self-configuration (where ugly-free means changing the self-configuration does not mean digging through gobs and gobs of nasty GUIDified keys in the Windows registry).

Failsauce #1: The Download.

(Good lord, that’s one ugly URL. Seriously, Microsoft, quit exposing ugly URLs to us like that; at least try.)

Microsoft created separate installers for different environments. If you have Windows Server 2008, you need to download a completely different installer than if you have Windows Server 2008 R2. That’s right. Microsoft could not possibly bundle an all-in-one package, because that would mean an almost 100MB download instead of an almost 50MB download. I’m not sure where the fail point is here: that separate binaries were created for these environments (really, why not just use logic in code to self-configure for the environment?), that the marketing team gave “R2” its ridiculous name when it is an incompatible OS, or that there was no small downloader app to fetch and install the correct binaries. To be fair, on the latter note, Microsoft does offer the “Web Platform Installer,” but co-workers at the office get raving mad when they have to download a generic multi-use installer, since it exposes certain undesirable security and/or stability risks to the production server environment (i.e. rogue installations of “extra features”, etc.).

Failsauce #2: Having prerequisite user actions in advance.

Seriously, don’t download and install AppFabric until you’ve read the Getting Started guide, because there are things that you must do before you install that will hurt you if you don’t. And don’t count on the Getting Started guide being adequate preparation, because you need to understand the product before you can make decisions about your preconfiguration. But don’t plan on reading up on it without having it working in front of you as you learn, so you can experiment and discover how the features are used. But don’t plan on having it working in front of you so that you can learn, because you have to have it installed first, and you can’t get it installed if it’s not preconfigured. Yeah. This is where we get the BSOD with the message “infinite user logic loop at preconfiguration 0x27591772”.

The prerequisite actions that must be made relate to setting up a Windows share, creating Active Directory accounts and giving rights to the share, and so on and so forth. Seriously, this is stuff that should be checkbox-enabled to be auto-configured by the installer, NOT pre-configured by the user!

We waded through the installation wizard and dealt with these preconfiguration details as we hit each roadblock in the wizard. This was our learning process, and it’s pretty conventional, folks. But unfortunately the wizard seemed to get somewhat unstable as it kept failing—the wizard didn’t crash, but the results seemed to be somehow corrupt.

Failsauce #3: Manual client configuration

After we spent at least five or ten minutes trying to set up a share to get past various steps in the installation wizard, and dealt with some weirdness in functionality where errors occurred because we hadn’t set up the ACL for the directory, etc., we then looked at our app to try to figure out how to point it at the service for management of ASP.NET session state. We didn’t know what to do to make this happen—surely a web.config entry needed to be made in our app? We spent some time digging around at Microsoft’s web site to find where a web.config entry is described to enable AppFabric as the session state service. OK, here’s the fail point: if you’re running a wizard to set up a distributed session state service, then you automatically know that there will be consumers of this service, so put the consuming app’s configuration in the wizard!! Providing a wizard for the service but not for the consumer is not only half-baked, it’s also back asswards. If you’re going to have a ridiculous standard of having crappier user interfaces the further toward the back-end you go, then the back-end in this case is the service, not the client (the consuming web app). So if you’re going to “wizardize” this stuff, don’t stop with the service; have a wizard also update the web.config of the consuming ASP.NET app, and drop in any required binaries if required.
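For reference, the kind of web.config entry we were hunting for looks roughly like this. I’m reconstructing it from memory of the docs, so treat the provider type and assembly names as approximate (which, given how stale the documentation was, is fitting):

```xml
<sessionState mode="Custom" customProvider="AppFabricCacheSessionStoreProvider">
  <providers>
    <!-- Type and assembly names approximate; verify against your AppFabric install -->
    <add name="AppFabricCacheSessionStoreProvider"
         type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache"
         cacheName="default" />
  </providers>
</sessionState>
```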

Failsauce #4: Broken GAC registrations and broken self-configured initialization

Even after we tracked down the documentation from Microsoft’s web site, it still didn’t work. Now it’s possible that the documentation has since changed (now it’s at ..

..) since we started on this just a short time (hours? days?) after v1.0’s release, but at the time our web app just plain couldn’t find the AppFabric assemblies described in the web.config. I believe the documentation lacked the proper strong-name description for configuring the app sections, or something similar; the assembly name was just the namespace and class name, and even that didn’t seem to match. Perhaps the documentation was really, really stale at the time. So I proposed that we find the GAC-referenced binaries from Program Files and copy them into the bin directory, in the hope that that might suffice.

Yeah, sure enough, the namespaces were wrong and stale, or somesuch. We waded through C:\Windows\assembly, scoured for DLLs in the Common Files directory in C:\Program Files, etc. That took us a long time since we weren’t sure what assembly name replaced the stale assembly name in the stale documentation, but long story short it ultimately did not work and after at least ten more minutes or so we just decided at this point to give up.

That evening my Twitter feed got updated a couple posts with #AppFabricSucks hashtag. That’s how this whole crappy experience made me feel.

What should have happened

Here’s what I think should have happened.

  1. Archaic configuration-oriented design. AppFabric should be an election-based distributed service. It should broadcast a request over the subnet via UDP to find a controller, with a callback URI readily available to handle “Im ur god pray to me” responses. Multiple deployments of the same service should be able to auto-negotiate an authority controller via automated election (yes, election .. as in, self-nominees, and votes). The controller should receive and redistribute data writes. If the controller dies suddenly, timeouts occur, which should result in performing the auto-negotiation/election process again. Likewise, if the controller is gracefully shut down, the auto-negotiation/election process occurs again.
  2. Installation should have offered auto-deployment of Windows shares, automatically set up the ACL, and even offer up a GUI for setting up a web app to use AppFabric for session state. The details of all of these should have been user-customizable but optionally so.
  3. Documentation, where required, should have been synchronized with the binaries’ deployment and not be stale at launch.
  4. Ultimately, the whole thing should be very plug-and-play.
  5. First impressions last a very, very, very long time, especially for something new. Don’t toss something over the wall and say “here it is” when it’s 1.0. If you’re doing something new like this, be a Steve Jobs-esque jerk: make absolutely certain that every little detail is perfect, and leave no part of the new user experience feeling like scrambled eggs and dirt.
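The election idea in item 1 is simpler than it sounds. Stripping away the UDP plumbing, the core logic is just this (a minimal sketch; all names are mine, and a real implementation would layer broadcasts and timeouts on top):

```javascript
// Each node broadcasts a self-nomination carrying its ID; the deterministic
// rule "highest ID wins" means every node computes the same winner.
function electController(nodeIds) {
  var best = null;
  for (var i = 0; i < nodeIds.length; i++) {
    if (best === null || nodeIds[i] > best) {
      best = nodeIds[i];
    }
  }
  return best; // null when no nodes responded
}

// When the controller times out or shuts down gracefully, drop it from the
// roster and simply run the same election again.
function replaceController(nodeIds, deadControllerId) {
  var survivors = [];
  for (var i = 0; i < nodeIds.length; i++) {
    if (nodeIds[i] !== deadControllerId) survivors.push(nodeIds[i]);
  }
  return electController(survivors);
}
```

Because the winning rule is deterministic, there is nothing to configure up front, which is the whole point: no shares, no ACLs, no wizard.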

This isn’t about AppFabric

This is about software. We have too much of it. We don’t have time to deal with troubleshooting another “advanced suite of services” for a web farm. We are already overwhelmed with understanding WCF, IIS 7, changes in ASP.NET Web Forms 4.0, and so on and so forth, on top of everyday concerns like our own business logic in C# and our T-SQL stored procedures. I love the modularity of IIS 7/7.5, but I’ve noticed that I’ve been getting annoyed now instead of excited when I see all the downloadable feature options; I just don’t have time to keep up with them all. So if I need to use one of these, it had better be able to run itself, because I don’t have time to take on another project of learning a whole new module.

I don’t have the hours it would seem to take to set up what should be a minor component of a web farm, particularly considering that the component we’re replacing (in this case the ASP.NET Session State service) took only a couple of minutes of setup (set the service to Automatic startup and flip AllowRemoteConnection=1 in the registry), because the older flavor was bundled and pre-deployed with ASP.NET / the .NET Framework itself.
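For comparison, that entire old setup fits in three commands. A sketch for an elevated Windows command prompt; the registry path and the AllowRemoteConnection value name belong to the aspnet_state service, but verify them against your own .NET Framework version before relying on this:

```shell
:: Set the ASP.NET State Service to start automatically, and start it now.
sc config aspnet_state start= auto
net start aspnet_state

:: Allow remote web servers in the farm to connect; by default the
:: service only accepts connections from the local machine.
reg add "HKLM\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters" ^
    /v AllowRemoteConnection /t REG_DWORD /d 1 /f
```

That, plus pointing each web.config’s sessionState element at the state server, is the whole job; there is no farm-wide installer, no ACL wizard, and no configuration suite to learn.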

Not only that but if first impressions at installation, setup, and startup fail me, then I’ll automatically have difficulty trusting the quality of the software. It’s in our nature as software engineers to find it more difficult to trust software that breaks when we touch it; of course, the culprit is almost always a PEBKAC error, but that’s all the more reason why any significant software application or service should be as foolproof at its point of setup as it is stable and accurate at runtime.

Another case in point, last I checked (about five years ago), Microsoft SQL Server was dancing in circles around Oracle DB. Oracle had features that Microsoft SQL Server could only dream about, but Oracle’s setup hell and disgusting administrative tooling left such a rancid taste in one’s mouth that it’s no wonder that it’s only used in cold, dry environments where human beings have relinquished their humanity.

Why I’m ranting on this again

I’m writing this blog post because today I’ve found myself doing it yet again. I’ve been pondering and planning to write yet another alternative to Microsoft’s offering, in order to get that bad first impression taste out of my mouth and replace it with something I can trust. But now we’re talking about AppFabric. Fricken AppFabric. And after compiling my notes, I’ve come to realize that I’d be insane if I tried to recreate even a tiny subset of this in a few sit-down sessions. (Maybe I am insane, insane enough even to keep considering doing it; a number of people think I’m a lunatic anyway.) But I’m not happy with being in this boat. Frankly, I’m angry. I’m angry because this is not anywhere close to being the first time a highly marketed Microsoft back-end technology has been such an unnecessarily difficult pain to get started with. I’m angry because it’s not just Microsoft: when I worked at Sage Software on the Saleslogix product, the web technology was a crazy multi-installer, multi-user, multi-ACL, multi-reboot nightmare. None of these software companies have any excuse. Why can’t Microsoft just put out back-end stuff that works in a foolproof manner? Why is that so hard for them? Why am I forced to be stuck in that boat the Linux people put me in, where I am the loser because I couldn’t install what should have been a simple service? It’s just so insulting. But I blame it all on Microsoft, and I hate tolerating this crap.





About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company with whom you're more likely than not to have directly done business. However, this blog and all of its content have no affiliation with, and are not representative of, his former employer in any way.
