Technology Status Update 2016

by Jon Davis 10. July 2016 09:09

Hello, peephole. (people.) Just a little update. I've been keeping this blog online for some time, but my most recent entries have been so negative that I have to see that negativity every time I check to make sure the blog's up, lol. I'm tired of it, so I thought I'd post something positive.

My current job is one I hope to keep for years and years to come, and if that doesn't work out I'll be looking for one just like it and try to keep it for years and years to come. I'm so done with contracting and consulting (except the occasional mentoring session on code mentor -dot- io). I'm still developing, of course, and as technology is changing, here's what's up as I see it. 

  1. Azure is relevant. 
    The world really has shifted to the cloud, and the majority of companies, finally, are offloading their hosting to it. AWS, Azure, take your pick; everyone who hates Microsoft will obviously choose AWS, but Azure is the obvious choice for Microsoft-stack folks, and there is nothing meaningful AWS has that Azure doesn't at this point. The amount of stuff on Azure is terrifying enough in quantity and supposed quality to give me a thrill. So I'm done with hating on Azure; after all their marketing and nagging and pushing, Microsoft has crossed a threshold of market saturation, and I am adequately impressed. I guess that means I have to be an Azure fan, too, now. Fine. Yay Azure, woo. -.-
  2. ASP.NET is officially rebooted. 
    So I hear this thing called ASP.NET Core 1.0, formerly known as ASP.NET 5, formerly known as ASP.NET vNext, has RTM'd, and I hear it's like super duper important. It snuck by me, and I haven't mastered it, but I know it enough to know a few things:
    • It's a total redux by means of redo. It's like the Star Trek reboot except it’s smaller and there are fewer planets it can manage, but it’s exactly like the Star Trek reboot in that it will probably implode yours.
    • If you've built your career on ASP.NET and you want to continue living on ASP.NET's laurels, now is not the time to master ASP.NET Core 1.0. Give it another year or two to mature. 
    • If you're stuck on or otherwise fascinated by non-Microsoft operating systems, namely Mac and Linux, but you want to use the Microsoft programming stack, you absolutely must learn and master ASP.NET Core 1.0 and EF7.
    • If all you liked from ASP.NET Core 1.0 was the dynamic configs and build-time transpiles, you don't need ASP.NET Core for that LOL LOL ROFLMAO LOL LOL LOL *cough*
  3. The Aurelia Javascript framework is nearly ready.
    Overall, Javascript framework trends have stopped. Companies are building upon AngularJS 1.x. Everyone who's behind is talking about React as if it were new and suddenly relevant (it isn't new anymore). Everyone still implementing Knockout is out of the loop and will die off soon enough. jQuery is still ubiquitous and yet ignored as a thing, but meanwhile it just turned v3.

    I don’t know what to think about things anymore. Angular 2.0 requires TypeScript, people hate TypeScript because they hate transpilers. People are still comparing TypeScript with CoffeeScript. People are dumb. If it wasn’t for people I might like Angular 2.0, and for that matter I’d be all over AureliaJS, which is much nicer but just doesn’t have Google as the titanic marketing arm. In the end, let’s just get stuff done, guys. Build stuff. Don’t worry about frameworks. Learn them all as you need them.
  4. Node.js is fading and yet slowly growing in relevance.
    Do you remember .. oh heck, unless you're graying, probably not, anyway .. do you remember back in the day when the dynamic Internet was first loosed on the public, when C/C++ and Perl were used to execute from cgi-bin, and if you wanted to add dynamic stuff to a web site you had to learn Perl and maybe find Perl pearls and plop them into your own cgi-bin? Yeah, no, I never really learned Perl either, but I did notice the trend. In the end, though, what did C/C++ and Perl mean to us up until the last decade? Answer: ubiquitous availability, but not web server functionality; just an ever-present availability for scripts, utilities, hacks, and whatever. That is where node.js is headed. Node.js for anything web related has become, and will continue to be, a gigantic mess of disorganized, everyone-is-equal, noisily integrated modules that sort of work but will never be as stable in built compositions as more carefully organized platforms. Frankly, I see node.js being more relevant as a workstation runtime than a server runtime. Right now I'm looking at maybe poking at it in a TFS build environment, but not so much for hosting things.
    I will always have a bitter taste in my mouth with node.js after trying to get socket.io integrated with Express and watching the whole thing just crumble, with no documentation or community help to resolve it. That happened not just once on the job (never resolved before I walked away) but also during a code-mentor mentoring session (which we didn't figure out), even after a good year or so of platform maturity following the first instance. I still like node.js but will no longer be trying to build a career on it.
  5. Pay close attention and learn up on Swagger aka OpenAPI. 
    Remember when -- oh wait, no, unless you're graying, .. nevermind .. anyway -- once upon a time something called SOAP came out, and with it came a self-documentation feature that was a combination of WSDL and some really handy HTML scaffolding generated into web services that would let you manually test SOAP-based services by filling out a self-generated form. Well, now that JSON-based REST is the entirety of the playing field, we need the same self-documentation. That's where Swagger came in a couple years ago, and everyone uses it now. Swagger needs some serious overhauling--someone needs to come up with a Swagger-compliant UI built on more modular and configurable components, for example--but as a drop-in self-documentation feature for REST services it fits the bill.
    • Swagger can be had on .NET using a lib called Swashbuckle. If you use OData, there is a lib called Swashbuckle.OData. We use it very, very heavily where I work. (I was the one who found it and brought it in.) "Make sure it shows up and works in Swagger" is a requirement for all REST/OData APIs we build now. (See the setup sketch after this list.)
    • Swagger is now OpenAPI, but it's still Swagger; there are not yet any OpenAPI artifacts that I know of other than Swagger. Which is lame. Swagger is ugly. Featureful, but ugly, and non-modular.
    • Microsoft is listed as a contributing member of the OpenAPI committee, but I don't know what that means, and I don't see any generic output from OpenAPI yet. I'm worried that Microsoft will build a black box (rather than white box) Swagger-compliant alternative for ASP.NET Core.
    • Other curious ones to pay attention to, but which I don't see as significantly supported by the .NET community yet (maybe I haven't looked hard enough), are:
  6. OData v4 has potential but is implementation-heavy and sorely needs a v5. 
    A lot of investments have been made in OData v4 as a web-based facade to Entity Framework data resources. It's the foundation of everything the team I'm with is working on, and I've learned to hate it. LOL. But I see its potential. I hope investments continue because it is sorely missing fundamental features like
    • MS OData needs better navigation property filtering and security checking, whether by optionally redirecting navigation properties to EDM-mapped controller routes (yes, taking a performance hit) or some other means
    • MS OData '/$count' breaks when [ODataRoute] is declared, boo.
    • OData spec sorely needs "DISTINCT" feature
    • $select needs to be smarter about returning anonymous models and not just eliminating fields; if all you want is one field in a nested navigation property in a nested navigation property (the equivalent of LINQ's .Select(x => new { ID = x.ID, DesiredField2 = x.Child.Child2.DesiredField2 })), in the OData result set you will have to dive into an array and then into another array to find the one desired field
    • MS OData output serialization is very slow and CPU-heavy
    • Custom actions and functions, and making them exposed to Swagger via Swashbuckle.OData, make me want to pull my hair out. It sometimes takes two hours of screaming and choking people to set up a route in OData where it would take me two minutes in Web API, and in the end I end up with a weird namespaced function name in the route like /OData/Widgets/Acme.GetCompositeThingmajig(4); there's no getting away from even the default namespace, and your EDM definition must be an EXACT match to what is clearly and obviously spelled out in the C# controller implementation or you die. I mean, if Swashbuckle / Swashbuckle.OData can mostly figure most of it out without making us dress up in a weird Halloween costume, surely Microsoft's EDM generator should have been able to.
  7. "Simple CRUD apps" vs "messaging-oriented DDD apps"
    has become the new Microsoft vs Linux or C# vs Java or SQL vs NoSQL. 

    The war is really ugly. Over the last two or three years people have really been talking about how microservices and reaction-oriented software have turned the software industry upside down. Those who hop on the bandwagon neglect to learn when to choose simpler tooling chains for simple jobs, while those who refuse to jump on the bandwagon use some really harsh, cruel words to describe the trend ("idiots", "morons", etc). We need to learn to love and embrace all of these forms of software, allow them to grow us up, and know when to choose which pattern for which job.
    • Simple CRUD apps can still accomplish most business needs, making them preferable most of the time
      • .. but they don't scale well
      • .. and they require relatively little development knowledge to build and grow
    • Non-transactional message-oriented solutions and related patterns like CQRS-ES scale out well but scale developers' and testers' comprehension very poorly; they have an exponential complexity footprint. For the thrill seekers, though, they can frankly be hella fun and interesting, so long as they are not built upon ancient ESB systems like SAP and so long as people can integrate in software planning war rooms.
    • Disparate data sourcing as with DDD with partial data replication is a DBA's nightmare. DBAs will always hate it, their opinions will always be biased, and they will always be right in their minds that it is wrong and foolish to go that route. They will sometimes be completely correct.

  8. Integrated functional unit tests are more valuable than TDD-style purist unit tests. That's my new conclusion about developer testing in 2016. The purist TDD mindset still has a role in the software developer's life, but there is also real value in automated integration tests, and when things like Entity Framework are heavily in play, apparently it's better to build upon LocalDB automation than Moq.
    At least, that’s what my current employer has forced me to believe. Sadly, the purist TDD mindset that I tried to adopt and bring to the table was not even slightly appreciated. I don’t know if I’m going to burn in hell for being persuaded out of a purist unit testing mindset or not. We shall see, we shall see.
  9. I'm hearing some weird and creepy rumors I don't think I like about SQL Server moving to Linux and eventually getting itself renamed. I don't like it, I think it's unnecessary. Microsoft should just create another product. Let SQL Server be SQL Server for Windows forever. Careers are built on such things. Bad Microsoft! Windows 8, .NET Framework version name fiascos, Lync vs Skype for Business, when will you ever learn to stop breaking marketing details to fix what is already successful??!
  10. Speaking of SQL Server, SQL Server 2016 is RTM'd, and full blown SSMS 2016 is free.
  11. On-premises TFS 2015 only just acquired gated check-in build support in a recent update. Seriously, like, what the heck, Microsoft? It's also super buggy; you get a nasty error message in Visual Studio while monitoring its progress. This is laughable.
    • Clear message from Microsoft: "If you want a premium TFS experience, Azure / Visual Studio Online is where you have to go." Microsoft is no longer a shrink-wrapped product company; they sell shrink-wrapped software only for the legacy folks, as an afterthought. They are a hosted-platform company now, all the way.
      • This means that Windows 10 machines including Nokia devices are moving to be subscription boxes with dumb client deployments. Boo.
  12. Another rumor I've heard is that Microsoft is going to abandon the game industry.

    The Xbox platform was awesome because Microsoft was all in. But they're not all in anymore, and it shows, and so now as they look at their lackluster profits, what did they expect?
    • Microsoft: either stay all-in with Xbox and also Windows 10 (dudes, have you seen Steam's Big Picture mode? no excuse!) or say goodbye to the consumer market forever. Seriously. Because we who thrive on the Microsoft platform are also gamers. I would recommend knocking yourselves over to partner with Valve to co-own the whole entertainment world, like the oligarchies that both of you are, since Valve did so well at keeping the Windows PC relevant to the gaming markets.
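Per the Swashbuckle note in item 5 above, here's roughly what the minimal setup looks like on classic ASP.NET Web API. This is just a sketch assuming the Swashbuckle NuGet package (which drops a similar bootstrap into App_Start); the title string is a made-up placeholder.

using System.Web.Http;
using Swashbuckle.Application;

public static class SwaggerConfig
{
    public static void Register()
    {
        // Generates the Swagger (OpenAPI) document and serves the UI at /swagger.
        GlobalConfiguration.Configuration
            .EnableSwagger(c => c.SingleApiVersion("v1", "My REST/OData API"))
            .EnableSwaggerUi();
    }
}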

For the most part I've probably lived under a rock, I'm sure. I've been too busy enjoying my new 2016 Subaru WRX (a 4-door racecar), which I am probably going to sell in the next year because I didn't get out of debt first, but not before getting a Kawasaki Vulcan S ABS Café as my first motorized two-wheeler, riding that between playing Steam games, going camping, and exploring other ways to appreciate being alive on this planet. Maybe someday I'll learn to help the homeless and unfed, as I should. BTW, in the end I happen to know that "love God and love people" are the only two things that matter in life. The rest is fluff. But I'm so selfish; man, do I enjoy fluff. I feel like such a jerk. Those who know me know that I am one. God help me.

[Photos: the 2016 Kawasaki Vulcan S ABS Café, plus two others]
Top row: Fluff that doesn’t matter and distracts me from matters of substance.
Bottom row: Matters of substance.


Tags:

ASP.NET | Blog | Career | Computers and Internet | Cool Tools | LINQ | Microsoft Windows | Open Source | Software Development | Web Development | Windows

Announcing Fast Koala, an alternative to Slow Cheetah

by Jon Davis 17. July 2015 19:47

So this is a quick FYI for teh blogrollz that I have recently been working on a little Visual Studio extension that will do for web apps what Slow Cheetah refused to do: it enables build-time transformations for web apps as well as for Windows apps and class libs.

Here's the extension: https://visualstudiogallery.msdn.microsoft.com/7bc82ddf-e51b-4bb4-942f-d76526a922a0  

Here's the Github: https://github.com/stimpy77/FastKoala

Either link will explain more.

Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to it to make it truly an I/O library. In the meantime, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are simplified into a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (as optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below
Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as when working with localStorage, it returns a "synchronous promise", which is essentially a function that can optionally be invoked immediately; doing so wraps .success(value) and returns the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The two examples below are equivalent, but only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implemented functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique: xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently with assertions that it will have some CRUD operations available, the perspective is flipped and you are focusing on what you need to do, the verbs, first, while the target becomes more like a parameter or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves so that, rather than repositories having an unknown set of operations with unknown interface styles and other features, instead, your standard CRUD operations, which are predictable, have a set of valid repository targets that support those operations.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented based on JSON/Javascript literals field sets (send only deltas)
Examples
// initialize

var xio = Xio(); // initialize a module instance named "xio"
localStorage
xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) {
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});
sessionStorage
xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");
cookie
xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, passing only the key and value to xio.set["cookie"](key, value), you can chain the helper replacer functions that are automatically returned:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");

Note that using this approach, while more expressive and potentially more convertible to other CRUD targets, also results in each helper function deleting the previous value and re-setting it with the new adjustment.

session cookie
xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
persistent cookie
xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
web server resource (basics)
var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().

When the methods property is defined as an array of "GET", "POST", etc., each one mapping to a standard XIO verb, an XHR route is internally created for each based on the rest of the options defined in the options object that is passed in as a parameter to define(). The return value of define() is an object that lists all of the various operations that were wrapped for XIO (i.e. get(), post(), etc).

The rest of the options are used, for now, as jQuery's $.ajax(..., options) parameter. The async property defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally immediately invoke with parens (()), and that will return the value that is normally passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.

web server resource (asynchronous GET)
xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});
web server resource (synchronous GET)
xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction
web server resource POST
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});
web server resource (DELETE)
xio.delete.myresourceContainer("myresource");
web server resource (PUT)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
web server resource (PATCH)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
custom implementation and redefinition
xio.define("custom1", {
    get: function(key) { return "teh value for " + key; }
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fall back to long polling and that has no dependency upon jQuery. Instead, what has so far been implemented is a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on nodejs.

Additionally, in a completely isolated parallel path, if no progress is made by the ASP.NET SignalR team to make the SignalR client freed from jQuery, xio.js might become tailored to be a somewhat code compatible client implementation or a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at top, you definitely may use it for any project. I will drop in a license (probably Apache 2 or BSD or Creative Commons Attribution or somesuch) in the near future.

Canvas & HTML 5 Sample Junk

by Jon Davis 27. September 2012 15:48

Poking around with HTML 5 canvas again, refreshing my knowledge of the basics. Here's where I'm dumping links to my own tinkerings for my own reference. I'll update this with more list items later as I come up with them.

  1. Don't have a seizure. http://jsfiddle.net/8RYtu/22/
    HTML5 canvas arc, line, audio, custom web font rendered in canvas, non-fixed (dynamic) render loop with fps meter, window-scale, being obnoxious
  2. Pass-through pointer events http://jsfiddle.net/MtGT8/1/
    Demonstrates how the canvas element, which would normally intercept mouse events, does not do so here, and instead allows the mouse event to propagate to the elements behind it. Huge potential but does not work in Internet Explorer.
  3. Geolocation sample. http://jsfiddle.net/nmu3x/4/ 
    Nothing to do with canvas here. Get over it.
  4. ECMAScript 5 Javascript property getter/setter. http://jsfiddle.net/9QpnW/8/
    Like C#, Javascript now supports assigning functions to property getters/setters. See how I store a value privately (in a closure) and do bad by returning a modified value.

Esent: The Decade-Old Database Engine That Windows (Almost) Always Had

by Jon Davis 30. August 2010 03:08

Windows has a technology I just stumbled upon that should make a few *nix folks jealous. It’s called Esent. It’s been around since Windows 2000 and is still alive and strong in Windows 7.

What is Esent, you ask? It’s a database engine. No, not like SQL Server, it doesn’t do the T-SQL language. But it is an ISAM database, and it’s got several features of a top-notch database engine. According to this page,

ESENT is an embeddable, transactional database engine. It first shipped with Microsoft Windows 2000 and has been available for developers to use since then. You can use ESENT for applications that need reliable, high-performance, low-overhead storage of structured or semi-structured data. The ESENT engine can help with data needs ranging from something as simple as a hash table that is too large to store in memory to something more complex such as an application with tables, columns, and indexes.

Many teams at Microsoft—including Active Directory, Windows Desktop Search, Windows Mail, Live Mesh, and Windows Update—currently rely on ESENT for data storage. And Microsoft Exchange stores all of its mailbox data (a large server typically has dozens of terabytes of data) using a slightly modified version of the ESENT code.

Features
Significant technical features of ESENT include:

  • ACID transactions with savepoints, lazy commits, and robust crash recovery.
  • Snapshot isolation.
  • Record-level locking (multi-versioning provides non-blocking reads).
  • Highly concurrent database access.
  • Flexible meta-data (tens of thousands of columns, tables, and indexes are possible).
  • Indexing support for integer, floating point, ASCII, Unicode, and binary columns.
  • Sophisticated index types, including conditional, tuple, and multi-valued.
  • Columns that can be up to 2GB with a maximum database size of 16TB.

Note: The ESENT database file cannot be shared between multiple processes simultaneously. ESENT works best for applications with simple, predefined queries; if you have an application with complex, ad-hoc queries, a storage solution that provides a query layer will work better for you.

Wowza. I need 16TB databases, I use those all the time. LOL.

My path to stumbling upon Esent was first by looking at RavenDB, which is rumored to be built upon Esent as its storage engine. Searching for more info on Esent, I came across ManagedEsent, which provides a crazy-cool PersistentDictionary and exposes the native Esent with an API wrapper.

To be quite honest, the Jet-prefixed API points look to be far too low-level for my interests, but some of the helper classes are definitely a step in the right direction in making this API more C#-like.

I’m particularly fascinated, however, by the PersistentDictionary. It’s a really neat, simple way to persist ID’d serializable objects to the hard drive very efficiently. Unfortunately it is perhaps too simple; it does not do away with the need for NoSQL services that provide rich document querying and indexing.
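To give a feel for it, here's a rough sketch of the PersistentDictionary usage pattern, assuming the ManagedEsent package (Microsoft.Isam.Esent.Collections); the directory name and key are made-up examples.

using System;
using Microsoft.Isam.Esent.Collections.Generic;

class PersistentDictionarySample
{
    static void Main()
    {
        // The dictionary persists itself as an ESENT database in the given directory.
        using (var dict = new PersistentDictionary<string, string>("EsentSampleData"))
        {
            dict["lastRun"] = DateTime.Now.ToString(); // the write survives process restarts
            Console.WriteLine(dict["lastRun"]);        // read it back like any IDictionary
        }
    }
}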

Looks like someone over there at Microsoft who plays with Esent development is blogging: http://blogs.msdn.com/b/laurionb/


Tags:

Microsoft Windows | Software Development

Four Methods Of Simple Caching In .NET

by Jon Davis 30. August 2010 01:07

Caching is a fundamental component of working software. The role of any cache is to decrease the performance footprint of data I/O by moving it as close as possible to the execution logic. At the lowest level of computing, a thread relies on a CPU cache to be loaded up each time the thread context is switched. At higher levels, caches are used to offload data from a database into a local application memory store or, say, a Lucene directory for fast indexing of data.

Although there are some decent hashtable-become-dictionary scenarios in .NET as it has evolved, until .NET 4.0 there was never a general purpose caching table built directly into .NET. By “caching table” I mean a caching mechanism where keyed items get put into it but eventually “expire” and get deleted, whether by:

a) a “sliding” expiration whereby the item expires in a timespan-determined amount of time from the time it gets added,

b) an explicit DateTime whereby the item expires as soon as that specific DateTime passes, or

c) the item is prioritized and is removed from the collection when the system needs to free up some memory.

There are some methods people can use, however.

Method One: ASP.NET Cache

ASP.NET, or the System.Web.dll assembly, does have a caching mechanism. It was never intended to be used outside of a web context, but it can be used outside of the web, and it does perform all of the above expiration behaviors in a hashtable of sorts.

After scouring Google, it appears that quite a few people who have discussed the built-in caching functionality in .NET have resorted to using the ASP.NET cache in their non-web projects. This is no longer the most-available, most-supported built-in caching system in .NET; .NET 4 has an ObjectCache which I’ll get into later. Microsoft has always been adamant that the ASP.NET cache is not intended for use outside of the web. But many people are still stuck in .NET 2.0 and .NET 3.5, and need something to work with, and this happens to work for many people, even though MSDN says clearly:

Note:

The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly.

The class for the ASP.NET cache is System.Web.Caching.Cache in System.Web.dll. However, you cannot simply new-up a Cache object. You must acquire it from System.Web.HttpRuntime.Cache.

Cache cache = System.Web.HttpRuntime.Cache;

Working with the ASP.NET cache is documented on MSDN here.
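As a rough sketch of what that looks like in practice (the key, value, and 20-minute window here are made-up examples), adding an item with a sliding expiration (a) and a priority that exempts it from memory-pressure scavenging (c) goes something like this:

using System;
using System.Web.Caching;

Cache cache = System.Web.HttpRuntime.Cache;

cache.Insert(
    "mykey",                        // the key must be a string
    "myvalue",                      // any object can be cached
    null,                           // no CacheDependency (a file dependency could go here)
    Cache.NoAbsoluteExpiration,     // not using absolute expiration (b)
    TimeSpan.FromMinutes(20),       // sliding expiration window (a)
    CacheItemPriority.NotRemovable, // don't scavenge this item under memory pressure (c)
    null);                          // optional CacheItemRemovedCallback

var value = (string)cache["mykey"]; // the indexer returns null if missing or expired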

Pros:

  1. It’s built-in.
  2. Despite the .NET 1.0 syntax, it’s fairly simple to use.
  3. When used in a web context, it’s well-tested. Outside of web contexts, according to Google searches it is not commonly known to cause problems, despite Microsoft recommending against it, so long as you’re using .NET 2.0 or later.
  4. You can be notified via a delegate when an item is removed, which is necessary if you need to keep it alive and you could not set the item’s priority in advance.
  5. Individual items have the flexibility of any of (a), (b), or (c) methods of expiration and removal in the list of removal methods at the top of this article. You can also associate expiration behavior with the presence of a physical file.

Cons:

  1. Not only is it static, there is only one. You cannot create your own type with its own static instance of a Cache. You can only have one bucket for your entire app, period. You can wrap the bucket with your own wrappers that do things like pre-inject prefixes in the keys and remove these prefixes when you pull the key/value pairs back out. But there is still only one bucket. Everything is lumped together. This can be a real nuisance if, for example, you have a service that needs to cache three or four different kinds of data separately. This shouldn’t be a big problem for pathetically simple projects. But if a project has any significant degree of complexity due to its requirements, the ASP.NET cache will typically not suffice.
  2. Items can disappear, willy-nilly. A lot of people aren’t aware of this—I wasn’t, until I refreshed my knowledge on this cache implementation. By default, the ASP.NET cache is designed to destroy items when it “feels” like it. More specifically, see (c) in my definition of a cache table at the top of this article. If another thread in the same process is working on something completely different, and it dumps high-priority items into the cache, then as soon as .NET decides it needs to require some memory it will start to destroy some items in the cache according to their priorities, lower priorities first. All of the examples documented here for adding cache items use the default priority, rather than the NotRemovable priority value which keeps it from being removed for memory-clearing purposes but will still remove it according to the expiration policy. Peppering CacheItemPriority.NotRemovable in cache invocations can be cumbersome, otherwise a wrapper is necessary.
  3. The key must be a string. If, for example, you are caching data records where the records are keyed on a long or an integer, you must convert the key to a string first.
  4. The syntax is stale. It’s .NET 1.0 syntax, even uglier than ArrayList or Hashtable. There are no generics here, no IDictionary<> interface. It has no Contains() method, no Keys collection, no standard events; it only has a Get() method plus an indexer that does the same thing as Get(), returning null if there is no match, plus Add(), Insert() (redundant?), Remove(), and GetEnumerator().
  5. Ignores the DRY principle of setting up your default expiration/removal behaviors so you can forget about them. You have to explicitly tell the cache how you want the item you’re adding to expire or be removed every time you add an item.
  6. No way to access the caching details of a cached item such as the timestamp of when it was added. Encapsulation went a bit overboard here, making it difficult to use the cache when in code you’re attempting to determine whether a cached item should be invalidated against another caching mechanism (i.e. session collection) or not.
  7. Removal events are not exposed as events and must be tracked at the time of add.
  8. And if I haven’t said it enough, Microsoft explicitly recommends against it outside of the web. And if you’re cursed with .NET 1.1, you’re not supposed to use it with any confidence of stability at all outside of the web, so don’t bother.

Method Two: The Enterprise Library Caching Application Block

Microsoft’s recommendation, up until .NET 4.0, has been that if you need a caching system outside of the web then you should use the Enterprise Library Caching Application Block.

The Microsoft Enterprise Library is a coordinated effort between Microsoft and a third party technology partner company Avanade, mostly the latter. It consists of multiple meet-many-needs general purpose technology solutions, some of which are pretty nice but many of which are in an outdated format such as a first-generation O/RM that doesn’t support LINQ.

I have personally never used the EntLib Caching App Block. It looks bloated. Others on the web have commented that they thought it was bloated, too, but once they started using it they saw that it was pretty simple and straightforward. I personally am not sold, but since I haven’t tried it I cannot pass a fair judgment. So for whatever it’s worth, here are some pros/cons:

Pros:

  1. Recommended by Microsoft as the cache mechanism for non-web projects.
  2. The syntax looks to be familiar to those who were using the ASP.NET cache.

Cons:

  1. Not built-in. The assemblies must be downloaded and separately referenced.
  2. Not actually created by Microsoft. (EntLib is an Avanade solution that Microsoft endorses as its own.)
  3. The syntax is the same .NET 1.0 style stuff that I believe defaults to normal priority (auto-ejects the items when memory management feels like it) rather than NotRemovable priority, and does not reinforce the DRY principles of setting up your default expiration/removal behaviors so you can forget about them.
  4. Potentially bloated.

Method Three: .NET 4.0’s ObjectCache / MemoryCache

Microsoft finally implemented an abstract ObjectCache class in the latest version of the .NET Framework, and a MemoryCache implementation that inherits and implements ObjectCache for in-memory purposes in a non-web setting.

System.Runtime.Caching.ObjectCache is in the System.Runtime.Caching.dll assembly. It is an abstract class that declares basically the same .NET 1.0 style interfaces that are found in the ASP.NET cache. System.Runtime.Caching.MemoryCache is the in-memory implementation of ObjectCache and is very similar to the ASP.NET cache, with a few changes.

To add an item with a sliding expiration, your code would look something like this:

var config = new NameValueCollection();
var cache = new MemoryCache("myMemCache", config);
cache.Add(new CacheItem("a", "b"),
    new CacheItemPolicy
    {
        Priority = CacheItemPriority.NotRemovable,
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
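Retrieval is symmetrical. As a quick follow-up to the snippet above (same made-up "a" key), and noting the static default instance that Microsoft recommends sharing:

var value = (string)cache.Get("a");   // or: var value = (string)cache["a"];
bool exists = cache.Contains("a");    // Contains() exists here, unlike the ASP.NET cache
cache.Remove("a");                    // explicit eviction

ObjectCache appCache = MemoryCache.Default; // the shared, app-wide instance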

Pros:

  1. It’s built-in, and now supported and recommended by Microsoft outside of the web.
  2. Unlike the ASP.NET cache, you can instantiate a MemoryCache object instance.
    Note: It doesn’t have to be static, but it should be—that is Microsoft’s recommendation (see yellow Caution).
  3. A few slight improvements have been made vs. the ASP.NET cache’s interface, such as the ability to subscribe to removal events without necessarily being there when the items were added, the redundant Insert() was removed, items can be added with a CacheItem object with an initializer that defines the caching strategy, and Contains() was added.

Cons:

  1. Still does not fully reinforce DRY. From my small amount of experience, you still can’t set the sliding expiration TimeSpan once and forget about it. And frankly, although the policy in the item-add sample above is more readable, it necessitates horrific verbosity.
  2. It is still not generically-keyed; it requires a string as the key. So you can’t key on a long or an int if you’re caching data records, unless you convert the key to a string first.

Method Four: Build One Yourself

It’s actually pretty simple to create a caching dictionary that performs explicit or sliding expiration. (It gets a lot harder if you want items to be auto-removed for memory-clearing purposes.) Here’s all you have to do (a rough sketch follows the list):

  1. Create a value container class called something like Expiring<T> or Expirable<T> that would contain a value of type T, a TimeStamp property of type DateTime to store when the value was added to the cache, and a TimeSpan that would indicate how far out from the timestamp that the item should expire. For explicit expiration you can just expose a property setter that sets the TimeSpan given a date subtracted by the timestamp.
  2. Create a class, let’s call it ExpirableItemsDictionary<K,T>, that implements IDictionary<K,T>. I prefer to make it a generic class with <K,T> defined by the consumer.
  3. In the class created in #2, add a Dictionary<K,Expiring<T>> as a property and call it InnerDictionary.
  4. The implementation of IDictionary<K,T> in the class created in #2 should use the InnerDictionary to store cached items. Encapsulation would hide the caching method details via instances of the type created in #1 above.
  5. Make sure the indexer (this[]), ContainsKey(), etc., are careful to clear out expired items and remove the expired items before returning a value. Return null in getters if the item was removed.
  6. Use thread locks on all getters, setters, ContainsKey(), and particularly when clearing the expired items.
  7. Raise an event whenever an item gets removed due to expiration.
  8. Add a System.Threading.Timer instance and rig it during initialization to auto-remove expired items every 15 seconds. This is the same behavior as the ASP.NET cache.
  9. You may want to add an AddOrUpdate() routine that pushes out the sliding expiration by replacing the timestamp on the item’s container (Expiring<T> instance) if it already exists.
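Here is a rough, abbreviated sketch of those steps. It leaves out most of the IDictionary<K,T> surface and the explicit-expiration setter, and the type and member names are only illustrative; the downloadable ExpirableItemDictionary.zip below is the real implementation.

using System;
using System.Collections.Generic;
using System.Threading;

public class Expiring<T>
{
    public T Value { get; set; }
    public DateTime TimeStamp { get; set; }  // when the value was added or refreshed
    public TimeSpan Expiration { get; set; } // how far past TimeStamp the item lives
    public bool IsExpired { get { return DateTime.UtcNow > TimeStamp + Expiration; } }
}

public class ExpirableItemsDictionary<K, T>
{
    private readonly Dictionary<K, Expiring<T>> InnerDictionary = new Dictionary<K, Expiring<T>>();
    private readonly object _sync = new object();
    private readonly TimeSpan _defaultExpiration;
    private readonly Timer _sweepTimer;

    public event Action<K, T> ItemExpired; // raised when an item is removed due to expiration

    public ExpirableItemsDictionary(TimeSpan defaultExpiration)
    {
        _defaultExpiration = defaultExpiration; // DRY: declare the policy once, then forget it
        _sweepTimer = new Timer(_ => RemoveExpired(), null,
            TimeSpan.FromSeconds(15), TimeSpan.FromSeconds(15)); // same sweep cadence as ASP.NET
    }

    public T this[K key]
    {
        get { T value; return TryGetValue(key, out value) ? value : default(T); }
        set { AddOrUpdate(key, value); }
    }

    public void AddOrUpdate(K key, T value)
    {
        lock (_sync)
        {
            InnerDictionary[key] = new Expiring<T>
            {
                Value = value,
                TimeStamp = DateTime.UtcNow, // refreshing the timestamp slides the expiration
                Expiration = _defaultExpiration
            };
        }
    }

    public bool TryGetValue(K key, out T value)
    {
        lock (_sync)
        {
            Expiring<T> container;
            if (InnerDictionary.TryGetValue(key, out container) && !container.IsExpired)
            {
                value = container.Value;
                return true;
            }
            if (container != null) Expire(key, container); // clear out the stale item on the way past
            value = default(T);
            return false;
        }
    }

    private void RemoveExpired()
    {
        lock (_sync)
        {
            var expired = new List<KeyValuePair<K, Expiring<T>>>();
            foreach (var pair in InnerDictionary)
                if (pair.Value.IsExpired) expired.Add(pair);
            foreach (var pair in expired) Expire(pair.Key, pair.Value);
        }
    }

    private void Expire(K key, Expiring<T> container)
    {
        InnerDictionary.Remove(key);
        var handler = ItemExpired;
        if (handler != null) handler(key, container.Value);
    }
}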

Microsoft has to support its original designs because its user base has built up a dependency upon them, but that does not mean that they are good designs.

Pros:

  1. You have complete control over the implementation.
  2. Can reinforce DRY by setting up default caching behaviors and then just dropping key/value pairs in without declaring the caching details each time you add an item.
  3. Can implement modern interfaces, namely IDictionary<K,T>. This makes it much easier to consume as its interface is more predictable as a dictionary interface, plus it makes it more accessible to helpers and extension methods that work with IDictionary<>.
  4. Caching details can be unencapsulated, such as by exposing your InnerDictionary via a public read-only property, allowing you to write explicit unit tests against your caching strategy as well as extend your basic caching implementation with additional caching strategies that build upon it.
  5. Although it is not necessarily a familiar interface for those who already made themselves comfortable with the .NET 1.0 style syntax of the ASP.NET cache or the Caching Application Block, you can define the interface to look like however you want it to look.
  6. Can use any type for keys. This is one reason why generics were created. Not everything should be keyed with a string.

Cons:

  1. Is not invented by, nor endorsed by, Microsoft, so it is not going to have the same quality assurance.
  2. Assuming only the instructions I described above are implemented, does not “willy-nilly” clear items for clearing memory on a priority basis (which is a corner-case utility function of a cache anyway .. BUY RAM where you would be using the cache, RAM is cheap).

     

Among all four of these options, this is my preference. I have implemented this basic caching solution. So far, it seems to work perfectly, there are no known bugs (please contact me with comments below or at jon-at-jondavis if there are!!), and I intend to use it in all of my smaller side projects that need basic caching. Here it is: 

Download: ExpirableItemDictionary.zip

Worthy Of Mention: AppFabric, NoSQL, Et Al

Notice that the title of this blog article indicates “Simple Caching”, not “Heavy-Duty Caching”. If you want to get into the heavy-duty stuff, you should look at memcached, AppFabric, ScaleOut, or any of the many NoSQL solutions that are shaking up the blogosphere and Q&A forums.

By “heavy duty” I mean scaled-out solutions, as in scaled across multiple servers. There are some non-trivial costs associated with scaling out.

Scott Hanselman has a blog article called, “Installing, Configuring and Using Windows Server AppFabric and the ‘Velocity’ Memory Cache in 10 Minutes”. I already tried installing AppFabric, it was a botched and failed job, but I plan to try to tackle it again, now that hopefully Microsoft has updated their online documentation and provided us with some walk-throughs, i.e. Scott’s.

As briefly hinted in the introduction of this article, another approach to caching is using something like Lucene.Net to store database data locally. Lucene calls itself a “search engine”, but really it is a NoSQL [Wikipedia link] storage mechanism in the form of a queryable index of document-based tables (called “directories”).

Other NoSQL options abound, most of them being Linux-oriented like CouchDB (which runs but stinks on Windows) but not all of them. For example, there’s RavenDB from Oren Eini (author of the popular ayende.com blog) which is built entirely in and for .NET as a direct competitor to the likes of Lucene.net. There are a whole bunch of other NoSQL options listed over here.

I have had the pleasure of working with Lucene.net and the annoyance of poking at CouchDB on Windows (Linux folks should not write Windows software, bleh .. learn how to write Windows services, folks), but not much else. Perhaps I should take a good look at RavenDB next.

 


Tags:

C# | Software Development

I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as “a C++ fixxer-upper”. I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google’s mantra still lines up. The video demonstrates a lot of evils of C++. Despite some attempts at self-edumacation, I personally cannot read nor write real-world C++ because I get lost in the gobbledygook that he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, The ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives as Java and now Go had, and it has actually been fundamentally evolving at the core, such as to replace patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.

While trying to take Go in with great excitement, I’ve been forced to conclude that Google’s own message delivery sucks, mainly by completely ignoring some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there)—much like Microsoft’s message delivery of C#’s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory but I can’t find it). The video linked above doesn’t even show a single example of Java suckage; it just made these painful accusations of Java being right in there with C++ as being a crappy language to work with. I haven’t coded in Java in about a decade, honestly, but back then Java was the shiznat and code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways and ultimately I find it far less appetizing than C#. But where is Google’s nod to C#? Oh that’s right, C# doesn’t exist, it’s a fragment of someone’s imagination because Google considers Microsoft (C#’s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator’s true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did as far as trying to “fix” C++. In fact, most of the problems Go describes of C++ were the focus of C#’s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google’s objectives themselves, we don’t see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for what were previously C++ development efforts using perhaps emacs or vi (Rob even listed IDEs as a failure in modern languages). Their requirements in Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn’t get in the way of their business objectives, that performs very fast, and that compiles fast enough that they can escape out of vi, invoke gcc or whatever compiler, and get right back to coding. These are certainly great nice-to-haves, but I’m pretty sure that’s about it.

Consider, in contrast, .NET’s objectives a decade ago, .NET being at the core of applied C# as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn’t suck, because its intentions were gorgeous—to have a language-neutral marshalling paradigm between runtimes—but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocation. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature; it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and the tools, because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant level. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s and thus theoretically Go’s performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement to which Google, in its corporate and web sandboxes, is oblivious. I’m not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement to which Google, in its corporate and web sandboxes, is oblivious, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here; instead they push Python and Java on their app cloud. I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn’t get this; it’s stupid to have to recompile an application’s distribution. Software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation. Once again, Google is not out in the real world; they live in a box (their headquarters and their exposed web).

And with the goals of C#,

  • C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go’s web site (http://golang.org/), because the language does not look like much of a readability / maintainability improvement over what Java and C# offered up.

  • Go supposedly offers up memory management, but still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static!
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := operator for combined declaration and assignment (plain assignment still uses =). I suppose this would help eliminate the issue where people type = where they meant to type == and destroy their variables.
  • Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized: late-executed code that should be called after the immediate code doesn’t have to be written below it; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it.
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is
       
        var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

       
      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..
       
        int[] p = null; // declares an array variable; p is null; rarely useful
        var v = new int[100]; // the variable v now refers to a new array of 100 ints

       
      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, never mind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbledygook. C# offers the richly defined System.Array, and all this stuff is transparent to the coder, who really doesn’t need to know that there are pointers, somewhere, associated with the reference to your array. Isn’t that the way it all should be? Is it really necessary to have a completely different semantic (new() vs. make())? Ohh yeah. The frickin pointer vs. the reference.
  • I see Go has a fmt.Printf(), plus a fmt.Fprintf(), plus a fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is; http://golang.org/search?q=overloading
  • Go has “goroutines”. It’s basically “go func() { /* do stuff */ }”, and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code, so yes, it’s supported (see the sketch after this list). F# (a young .NET sibling of C#) has this too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s.
  • Go has channels for communication purposes. C# has WCF for this, which is frankly a mess. The closest you can get to Go on the CLR as far as channels go is Axum, which is a variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover.
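
To ground the goroutine comparison a couple of bullets up, here is roughly what the one-liner equivalent looks like in C#: queue an anonymous delegate onto the thread pool and it runs in parallel. This is a minimal sketch of the fire-and-forget half of the comparison only; it makes no attempt to mirror Go’s channels.

using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Go:  go func() { /* do stuff */ }()
        // C#:  queue an anonymous delegate onto a worker thread pool thread.
        ThreadPool.QueueUserWorkItem(delegate
        {
            Console.WriteLine("doing stuff in parallel on thread " +
                Thread.CurrentThread.ManagedThreadId);
        });

        // Give the background work a moment to finish before the process exits
        // (a real program would coordinate with an event or other signaling).
        Thread.Sleep(100);
    }
}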

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (which, frankly, typically suck at a design level when they aren’t tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, honestly I think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined everywhere on planet Earth through the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix and the applications and services that run on it have been the majority; if nothing else, Microsoft has most certainly still had the lead in software as a company, which is the point I’m making.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly the fact that Go presentations neglect C# seriously calls Go’s trustworthiness into question.

In my opinion, a better approach to what Google was trying to do with Go would be to take a popular language, such as C#, F#, or Axum, and break it away from the language’s implementation libraries, i.e. the .NET platform’s BCL, replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be both a) a much better language and runtime than Go because it would offer most of the benefits of Go but in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR’s and C#’s multitude of advantages over C/C++), but also b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al, our needs are quite well met. So thanks anyway, Google, but, really, you should go now.


C# | F# | General Technology | Mono | Opinion | Software Development

Gemli.Data: Basic LINQ Support

by Jon Davis 6. June 2010 04:13

In the Development branch of Gemli.Data, I finally got around to adding some very, very basic LINQ support. The following test scenarios currently seem to function correctly:

var myCustomEntityQuery = new DataModelQuery<DataModel<MockObject>>();
// Scenarios 1-3: Where() lambda, boolean operator, method exec
var linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue == "dah");
linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue != "dah");
// In the following scenario, GetPropertyValueByColumnName() is an explicitly supported method
linqq = myCustomEntityQuery.Where(mo => ((int)mo.GetPropertyValueByColumnName("customentity_id")) > -1);
// Scenario 4: LINQ formatted query
var q = (from mo in myCustomEntityQuery
         where mo.Entity.MockStringValue != "st00pid"
         select mo) as DataModelQuery<DataModel<MockObject>>;
// Scenario 5: LINQ formatted query execution with sorted ordering
var orderedlist = (from mo in myCustomEntityQuery
                   where mo.Entity.MockStringValue != "def"
                   orderby mo.Entity.MockStringValue
                   select mo).ToList();
// Scenario 6: LINQ formatted query with multiple conditions and multiple sort members
var orderedlist2 = (from mo in myCustomEntityQuery
                    where mo.Entity.MockStringValue != "def" && mo.Entity.ID < 3
                    orderby mo.Entity.ID, mo.Entity.MockStringValue
                    select mo).ToList();

This is a great milestone, one I’m very pleased with myself for finally accomplishing. There’s still a ton more to do but these were the top 50% or so of LINQ support scenarios needed in Gemli.Data.

Unfortunately, adding LINQ support brought about a rather painful rediscovery of critical missing functionality: the absence of support for OR (||) and condition groups in Gemli.Data queries. *facepalm*  I left it out earlier as a to-do item but completely forgot to come back to it. That’s next on my plate. *burp*
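
To make “condition groups” concrete, here is the rough shape of the grouped-OR query I have in mind. Fair warning: this is purely hypothetical syntax; WhereGroup() and Or() don’t exist in Gemli.Data today, and Order is just a stand-in POCO.

// HYPOTHETICAL sketch of OR / condition-group support; none of this is in Gemli.Data yet.
// Intended SQL: WHERE IsActive = 1 AND (Amount > 5000 OR Priority = 'High')
var query = DataModel<Order>.NewQuery()
    .WhereProperty["IsActive"].IsEqualTo(true)
    .WhereGroup(g => g                                    // hypothetical grouping method
        .WhereProperty["Amount"].IsGreaterThan(5000m)
        .Or()                                             // hypothetical OR combinator
        .WhereProperty["Priority"].IsEqualTo("High"));
var results = query.SelectMany().Unwrap<Order>();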

Microsoft: We’re Not Stupid

by Jon Davis 2. May 2010 23:40

We get it, Microsoft. You want us to use Azure. You want us to build highly scalable software that will run in the cloud—your cloud. And we’ll get the wonderful support from Microsoft if we choose Azure. Yes, Microsoft. We get it.

We’re not stupid. You play up Azure like it’s a development skill, or some critical piece of the Visual Studio development puzzle, but we recognize that Azure is a proprietary cloud service that you’re advertising, not an essential tool chain component. Now go away. Please. Stop diluting the MSDN Magazine articles and the msdev Facebook app status posts with your marketing drivel about Azure. You are not going to get any checks written out from me for hosted solutions. We know that you want to profit from this. Heck, we even believe it might actually be a half-decent hosting service. But, Microsoft, you didn’t invent the cloud, there are other clouds out there, so tooling for your operating system using Visual Studio does not mean that I need to know diddly squat about your tirelessly hyped service.

There are a lot of other things you can talk about and still make a buck off of your platform. You can talk about how cool WPF is as a great way to build innovative Windows-only products. You can focus on how fast SQL Server 2008 R2 is and how Oracle wasted their money on a joke (MySQL). You can play up the wonderful extensibility of IIS 7 and all the neat kinds of innovative networked software you can build with it. Honestly, I don’t even know what you should talk about because you’re the ones who know the info, not me.

But, Microsoft, it’s getting really boring to hear the constant hyping of Azure. I’ve already chosen how my stuff will be hosted, and that’s not going to change right now. So honestly, I really don’t care.

Maybe I need to explain why I don’t care.

Microsoft, there are only two groups of people who are going to choose your ridiculously wonderful and bloated cloud: established mid-market businesses with money to spend, and start-ups with a lot of throw-away capital who drank your kool aid. You shouldn’t worry about those people. The people you should worry about are those who will choose against it, and will have made their decision firmly.

First of all, I believe most enterprises will not want to put their data on a cloud, certainly not with a standardized set of cloud interfaces. It’s too great a security risk. Amazon’s true OS cloud is enticing because companies can roll their own proprietary APIs and have them talk to each other while rolling out VM instances on a whim. They have sufficient tooling outside of cloud-speak to write what they need and to do what needs doing. But for the most part, companies want to keep internal data internal.

Second, we geeks don’t fiddle a whole lot with accounting and taking corporate risks. We focus on writing code. That code has to be portable. It has to run locally, on a dedicated IIS server, or in a cloud. If code written to deploy to your cloud—whether to the real cloud or locally for testing, it doesn’t matter—doesn’t run equally well in other environments, it’s at best a redundant effort and at worst a potentially wasted one. We have to write code for your cloud and then we have to write code for running without your cloud. We most certainly would not be comfortable writing code that only runs on your cloud, but the way your cloud APIs are marketed, we might as well bet the whole farm on it. And that just ain’t right.

See, I don’t like going into anything not knowing I can pull out and look at alternatives at any time without having completely wasted my efforts. If I’m going to write code for Azure, I want to be assured that the code will have the same functionality outside of Azure. But since Azure APIs only run in the Azure cloud, and Azure cannot be self-hosted (outside of localhost debugging), I don’t have that assurance. Hence, I as a geek and as an entrepreneur have no interest in Azure.

When I choose a tool chain, I choose it for its toolset, not for its target environment. I already know that Windows Server with IIS is adequate for the scale of runtimes I work with. When I choose a hosting service, I choose it expecting to be very low-demand but with potential for high demand. I don’t want to pay out the nose for that potential. I often experiment with different solutions and discover a site’s market potential. But I don’t go in expecting to make a big buck—I only go in hoping to.

What would gain my interest in Azure? Pretty much the only thing that would have me give Azure even a second glance would be a low-demand (low traffic, low CPU, low storage, and low memory) absolutely free account, whereby I am simply billed if I go over my limit. If free’s no good, then a flat ridiculously low rate, like $10/mo for reasonable usage and a reasonable rate when I go over. A trial is unacceptable. I’m not going to develop for something that is going to only be a trial. And I also prefer a reasonable flat rate for low-demand usage over a generated-per-use one. I prefer to have an up-front idea of how much things will cost. I don’t have time to keep adjusting my budget. I don’t want to have to get billed first in order to see what my monthly cost will be.

I’m actually paying quite a bit of money for my Windows 2008 VPS, but the nice thing about it is there are no surprises, the server will handle the load, and if I ever exceed its capacity I can just get another account. Whereas, cloud == surprises. You have to do a lot of manual number crunching in order to determine what your bill is going to look like. “Got a lot of traffic this month? We got you covered, we automatically scaled for you. Now here’s your massive bill!”

Let’s put it this way, Microsoft. If you keep pushing Azure at me, I can abandon your tool chain completely and stick with my other $12/mo Linux VM that would meet my needs for a low-demand server on which I still have the support of some magnificent open source communities, and if my needs grow I can always instance another $12/mo account. Honestly, the more diluted the developer discussions are with Azure hype, the more inclined I am to go down that path. (Although, I’ll admit it’ll take a lot more to get me to go all the way with that.)

Just stop, please. I have no problem with Azure, you can put your banner ads and printed ads into everything I touch, I’m totally fine with that. What is really upsetting to me is when magazine and related content, both online and printed, is taken up to hype your proprietary cloud services, and I really feel like I’m getting robbed as an MSDN subscriber.

Just keep in mind, we’re not stupid. We do know service marketing versus helpful development tips when we see it. You’re only hurting yourselves when you push the platform on us like we’re lemmings. Speaking for myself, I’m starting to dread what should have been a wonderful year of 2010 with the evolution of the Microsoft tool chain.

 

[UPDATE: According to this, Azure will someday be independently hostable. That's better. Now I might start paying attention.] 


Peeves | Software Development | Web Development

Gemli.Data: Pagination and Counts

by Jon Davis 5. October 2009 03:54

It occurred to me that I’m not blogging enough about how Gemli is shaping, other than mentioning the project on the whole in completely different blog posts.

In the dev branch I’ve dropped the abstract base DataModelQuery class (only DataModelQuery<TModel> remains) and replaced it with an IDataModelQuery interface. This was a long-needed change, as I was really only using it for its interfaces anyway. It was being used because I didn’t necessarily know the TModel type until runtime, namely in the DataProviderBase class which handles all the deep loads with client-side-joins-by-default behavior. But an interface will work for that just as well, and meanwhile the query chaining syntax behavior was totally inconsistent and in some cases useless.

Some recent changes that have been made in the dev branch include some more syntactical sugar, namely support for loading and saving directly from a DataModel/DataModel<T> class or from a DataModelQuery<TModel>, plus support for pagination and getting record counts.

Quick Loads

From DataModel:

List<MyPoco> collectionOfMyPoco = DataModel<MyPoco>.LoadAll().Unwrap<MyPoco>();

There are also Load() and LoadMany() on DataModel, but since they require a DataModelQuery object as a parameter, and DataModelQuery has new SelectFirst() and SelectMany() that do the same thing (forward the task on to the DataProvider), I’m not sure they’re needed and I might just delete them.

From DataModelQuery:

List<MyPoco> collectionOfMyPoco = DataModel<MyPoco>.NewQuery()
    .WhereProperty["Amount"].IsGreaterThan(5000m)
    .SelectMany().Unwrap<MyPoco>();

Pagination

Since Gemli is being built for web development, pagination support with pagination semantics (“page 3”, rather than “the rows between rows X and Y”) is a must. There are many things besides data grids that require pagination—actually, pretty much any list needs pagination if you intend not to show as many items on a page as there are in a database table.

Client-side pagination is implemented by default, but only if DB server-side pagination is not handled by the data provider. I still intend to add DB-side pagination for SQL Server.

// get rows 61-80, or page 4 @ 20 items per page, of my filtered query
var myListOfStuff = DataModel<Stuff>.NewQuery()
    .WhereProperty["IsActive"].IsEqualTo(true)
    .Page[4].OfItemsPerPage(20)
    .SelectMany().Unwrap<Stuff>();

To add server-side pagination support for a proprietary database, you can inherit DbDataProvider and override the two variations of CreateCommandBuilder() (one takes a DataModel for saving, the other takes a DataModelQuery for loading). The command builder object has a HandlePagination property that can be assigned a delegate. The delegate would then add or modify properties in the command builder that inject appropriate SQL text into the command.
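
Roughly, the override might look like the sketch below for SQL Server. Fair warning: only DbDataProvider, CreateCommandBuilder(), and HandlePagination are named above; the type names, signatures, and builder properties in this sketch (GemliCommandBuilder, OrderByClause, CommandText, the delegate shape) are my own guesses for illustration, not the actual Gemli.Data surface.

// SKETCH ONLY: type names, signatures, and builder properties below are illustrative
// guesses, not the actual Gemli.Data API.
public class SqlServerPagingProvider : DbDataProvider
{
    // Override the query-side CreateCommandBuilder() (the overload that takes a query for loading)
    // and assign a delegate to HandlePagination that injects SQL Server paging SQL.
    protected override GemliCommandBuilder CreateCommandBuilder(IDataModelQuery query)
    {
        var builder = base.CreateCommandBuilder(query);
        builder.HandlePagination = delegate(GemliCommandBuilder b, int page, int itemsPerPage)
        {
            int first = ((page - 1) * itemsPerPage) + 1;
            int last = page * itemsPerPage;
            // Wrap the generated SELECT in a ROW_NUMBER() window (SQL Server 2005+)
            // so only the requested page of rows comes back from the server.
            b.CommandText =
                "SELECT * FROM (SELECT ROW_NUMBER() OVER (ORDER BY " + b.OrderByClause + ") AS __row, q.* " +
                "FROM (" + b.CommandText + ") AS q) AS paged " +
                "WHERE __row BETWEEN " + first + " AND " + last;
        };
        return builder;
    }
}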

Count

Support for a basic SELECT COUNT(*) FROM MyTable WHERE [..conditions..] is a pretty obvious part of any minimal database interaction library (O/RM or what have you). For this reason, GetCount<TModel>(DataModelQuery query) has been added to DataProviderBase, and in DbDataProvider it replaces the normal LoadModel command builder generated text with SELECT COUNT(*)…  The DataModelQuery<TModel> class also has a new SelectCount() method which forwards the invocation to the DataProvider.

long numberOfPeepsInMyNetwork = DataModel<Person>.NewQuery()
    .WhereProperty["Networkid"].IsEqualTo(myNetwork.ID)
    .SelectCount();

All of these changes are still pending another alpha release later. But if anyone wants to tinker it’s all in the dev branch in source code at http://gemli.codeplex.com/ .

LINQ May Be Coming

I still have a lot of gruntwork to do to get this next alpha built, particularly in adding a lot more tests. I still haven’t done enough testing of deep loads and deep saves and I’m not even confident that deep saves are working correctly at the basic level as I haven’t yet used them. Once all of that is out of my way, I might start looking at adding LINQ support. I’m not sure how awfully difficult LINQ support will prove to be yet, but I’m certain that it’s doable.

One of the biggest advantages of LINQ support is strongly typed member checking. For example, in Gemli, you currently have to declare your WHERE clauses with a string reference to the member name or the column name, whereas with LINQ the member name can be referenced as strongly-typed code. As far as I know, it does this by way of syntactic sugar that ultimately boils down to delegates that literally work with the object directly, depending on how the LINQ support was added.
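
For what it’s worth, the mechanism in C# 3.0 is that when a method parameter is typed as Expression<Func<T, bool>> instead of Func<T, bool>, the compiler hands the method an expression tree rather than a compiled delegate, and the provider can walk that tree to pull out member names and comparison values. Here is a minimal, generic illustration of that inspection (plain .NET, not Gemli code):

using System;
using System.Linq.Expressions;

static class WhereClauseInspector
{
    // Given a predicate like p => p.Name == "abc", pull out the member name ("Name")
    // that the strongly typed lambda refers to. This is the kind of inspection a
    // LINQ provider performs instead of executing the lambda as a delegate.
    public static string GetMemberName<T>(Expression<Func<T, bool>> predicate)
    {
        var binary = (BinaryExpression)predicate.Body;   // e.g. p.Name == "abc"
        var member = (MemberExpression)binary.Left;      // e.g. p.Name
        return member.Member.Name;                       // "Name"
    }
}

class Person { public string Name { get; set; } }

class Demo
{
    static void Main()
    {
        // The member name comes from compiled, refactoring-safe code, not a magic string.
        string column = WhereClauseInspector.GetMemberName<Person>(p => p.Name == "abc");
        Console.WriteLine(column); // prints "Name"
    }
}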

Among the other advantages of adding LINQ support might be support for anonymous types. For example, right now without LINQ you’re limited to DataModels and the class structures they represent. But LINQ lets you select specific members into your return object rather than the whole enchilada.

LINQ:

// only return ID and Name, not the whole Product
// (products here is any IEnumerable<Product> or IQueryable<Product> source)
var mySelection = (from p in products
                   select new { p.ID, p.Name }).ToList();

It’s this underlying behavior that creates patterns for me to work with, and that might let me drag in support for SQL aggregate functions using terse Gemli+LINQ semantics. Right now, it’s literally impossible for Gemli to select the result of an aggregate function (other than Count(*)) unless it’s wrapped in a stored procedure on the DB side and then wrapped in a DataModel on Gemli’s side. There’s also no GROUP BY support, etc. I like to believe that LINQ will help me consolidate the interface patterns I need to make all of that work correctly. However, so far I’m still just one coder, as no one has jumped on board with Gemli yet, so we’ll see if LINQ support makes its way in at all.
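
For reference, the kind of grouping/aggregate expression I mean is already terse in standard LINQ. The sketch below runs against plain in-memory objects (LINQ to Objects, not Gemli); it is the shape a provider would ultimately have to translate into GROUP BY / SUM SQL:

using System;
using System.Collections.Generic;
using System.Linq;

class OrderLine { public string Category; public decimal Amount; }

class GroupByDemo
{
    static void Main()
    {
        var lines = new List<OrderLine>
        {
            new OrderLine { Category = "Books", Amount = 12.50m },
            new OrderLine { Category = "Books", Amount = 7.25m },
            new OrderLine { Category = "Music", Amount = 9.99m },
        };

        // Group by a member and aggregate within each group; a LINQ-enabled O/RM
        // would translate this into SELECT Category, SUM(Amount) ... GROUP BY Category.
        var totals = from line in lines
                     group line by line.Category into g
                     select new { Category = g.Key, Total = g.Sum(l => l.Amount) };

        foreach (var t in totals)
            Console.WriteLine(t.Category + ": " + t.Total);
    }
}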


Pet Projects | Software Development


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
