Fediverse Microblogging Protocols: Part 1

by Jon Davis 3. November 2019 15:19

You may or may not have heard of "the Fediverse". No? How about Mastodon? If not that, surely you have heard of gab.com.

Gab.com was rebuilt a year or so ago upon Mastodon, an open source microblogging web platform. It is a branded part of the Fediverse, a network of microblogging sites that are all able and potentially willing (via a human decision and a configuration) to talk to each other. Effectively, this is fail-safe distributed computing with social microblogging (Twitter clones) as the form of application. The objective of these frameworks was to establish a censorship-proof system of web hosting, so that people who wanted to enjoy microblogging topics considered outside of what is generally socially acceptable could always have a place to go on the Internet, because the microblogging hosts are effectively proxied out across many servers. One might say it is the dark web originating on the open, public web/Internet. 

The Fediverse was also created largely by the alt-left. These are the bizarre perverts of the world, such as nudists, furries, and people who fantasize about bestiality, typically via hand-drawn anime art or cosplay. Until recently, sexual deviants were considered too far astray from the norms of society to be allowed a place on the Internet, so censorship was frequent, which is a key motivator of the Fediverse. However, gab.com, which also struggles with a history of censorship, facilitates thoughts and content from people from all walks of life, including some people on the extreme alt-right--Neo-Nazis and the like. So gab.com forked Mastodon to recreate the gab.com social network on the Mastodon framework, replacing, adding, and removing features according to Gab's whims and needs. Needless to say, the Mastodon community was furious about this--that alt-right zealots are allowed to have an uncensored platform--but they made this bed. 

Meanwhile, I personally have been very studiously monitoring social networking technologies, companies, and trends since the beginning. I was a participant from the beginning. Like, the very, very beginning. GeoCities was the socially driven personal expression platform of choice at the beginning. My first web site was on GeoCities, until I started hosting my own domain. Then came MySpace and blogging. Everyone had "a MySpace". Then, from out of nowhere, Facebook took off, and the personalized rebranding of an individual's profile on MySpace was deemed exotically ugly and forever forgotten in favor of Facebook's standardized, fixed look and feel. All the while, Facebook's mixing of "wall" posts from various users--friends and followed feeds--was something new and amazing. It filled the gap left by the antiquation of NNTP newsgroups and message boards. It completely changed the landscape of Internet social networking. Then came Twitter and YouTube. With YouTube, sharing your life on video became not just possible but a normal way for people to experience pseudo-relationships of mindsharing, complete with facial expressions and body language, with the connected world around them. The people of the world became truly cohesive; the Internet was the glue.

And then the people who had traditionally held power over people's minds began to panic. Facebook and Google and the like saw their roles in the world as guidance mechanisms for swaying world opinion, and mainstream media (MSM) began to have their published opinions (*cough* .. "news") prioritized as curated "Trends". YouTube stopped allowing the popularity of everyday people to determine their priority in searches and top-level browsing, and instead now prioritizes packaged infotainment like CNN as the primary resources to be found for any given topic. And Twitter, Facebook, YouTube, Instagram, and so on are all perpetually being mentioned in the (alternative) news headlines for censoring people for sharing thoughts and opinions that go against the preconceived narratives that mainstream media and big tech employees (most of whom are in coastal or otherwise high-population cities, I might add, notably San Francisco, Portland, Los Angeles, and New York City) are trying to guide the world to embrace.

This is unacceptable, but at the same time expected. As a conservative Christian, I must admit that we had "quite a ride" of privileged authority in driving society for so many decades, even centuries, yet we were told by Jesus to expect that the world would hate us, that we would be the underdog. That hasn't happened in our Western society, not really, not in my lifetime. So I can't be angry that conservatism has started to move towards the back seat. Nor should I suggest we are somehow "victims". We're not. The victims are not in the West. The victims are in China, in Pakistan, in India, all over the place other than the West. The West's time is coming. It isn't here yet, despite "progress" in multiculturalism efforts. But it's coming. This expectation has been instilled in me since pre-Matrix days, even in the music I listened to as a youth.

So my interest is in watching for, supporting, and participating in censor-proof platforms, "for everyone" but including conservative Christians. This is why I've invested in Gab. I've paid for a lifetime pro membership with Gab. But I've only just gotten started.

Gab.com made their software (their fork of Mastodon) open source and published it at https://code.gab.com/. They invite the Internet community to clone it and set up Gab Social instances all over the Internet. Presumably people have. So, I tried to. I can't. Trying on both my home workstation and my laptop, both of which run Windows 10, I am stuck with local setup failure, starting with the error that webpack-dev-server never actually gets itself up. Gab Social's repository doesn't seem to have a support group, there doesn't seem to be a Gab Social group on gab.com, and no one replied to my post on Gab, so I'm left in the dark here, and with this I'm also seeing some serious community-related limitations of the platform in the first place.

My motives for running Gab Social and working on its code myself were indeterminate--it's not truly open source for the gab.com platform so much as disclosed source--but at minimum I wanted to see what it consisted of. Even though I can't get it up and running and adapt it to my needs and interests--without, say, who knows, maybe somehow $buying an audience with someone at Gab?--I can at least see on the surface what it consists of.

The Gab Social software stack consists of:

  • PostgreSQL
  • Ruby on Rails
  • Node.js
  • Redis
  • ReactJS
  • deployments on Docker
Knowing that the software is based on Mastodon, which is a branded implementation player on the Fediverse, this means that it runs on one or more of these protocols:

  • ActivityPub (the W3C-standardized federation protocol that current Mastodon versions use)
  • OStatus (the older federation suite built on Atom feeds, Salmon, and PubSubHubbub/WebSub)
  • WebFinger (used alongside these for account discovery across instances)
Knowing the protocol(s) used by Fediverse-participating software is important because it means that I, as a Microsoft-stack .NET developer, might be able to build a microblogging-oriented social network infrastructure that is compatible with Gab Social, and perhaps integrate with it (at least to the extent of being on the same Fediverse), without actually utilizing any of its RoR/Node.js/etc. implementation--not that I'm fundamentally opposed to RoR/Node.js/etc.
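To make that concrete for myself, the entry point for Fediverse interoperability is small enough to sketch. Below is a minimal, hypothetical WebFinger discovery endpoint in ASP.NET Core MVC--the domain example.social, the /users/{name} actor URL, and the controller name are all placeholders of my own invention, so treat this as a sketch of the handshake, not anything Gab Social actually ships. A Fediverse peer looking up @alice@example.social issues GET /.well-known/webfinger?resource=acct:alice@example.social and expects a JRD document pointing at the user's ActivityPub actor:

using Microsoft.AspNetCore.Mvc;

[ApiController]
public class WebFingerController : ControllerBase
{
    // Handles GET /.well-known/webfinger?resource=acct:alice@example.social
    [HttpGet("/.well-known/webfinger")]
    public IActionResult Get([FromQuery] string resource)
    {
        if (string.IsNullOrEmpty(resource) || !resource.StartsWith("acct:"))
            return BadRequest();

        // "acct:alice@example.social" -> "alice" (no real user lookup in this sketch)
        var user = resource.Substring("acct:".Length).Split('@')[0];

        // JRD response pointing at the (hypothetical) ActivityPub actor document
        return new JsonResult(new
        {
            subject = resource,
            links = new[]
            {
                new
                {
                    rel = "self",
                    type = "application/activity+json",
                    href = $"https://example.social/users/{user}"
                }
            }
        })
        {
            ContentType = "application/jrd+json"
        };
    }
}

The real work would be the actor document itself and the ActivityPub inbox/outbox endpoints behind it; this only answers "where does this account live?", which is the part WebFinger standardizes.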
 
I will be following up with a Part 2 of this blog entry when I have done more homework on these and have come to a better understanding of what's been going on here. Sadly, all of the bullet points in the last bullet list above are foreign to me, which means I haven't been monitoring social networking tech as closely as I should have. That is particularly shameful considering I have had a number of false starts at social network platform framework side projects over the years, albeit those efforts effectively stopped right around the time these seem to have begun. 


Blog | General Technology | Software Development | Web Development | Microblogging

Prelude to a Blogging Reset

by Jon Davis 19. October 2019 22:55

I started blogging in around 2002/2003 when Blogger and Radio UserLand were inspirations for me to create a desktop blogging app I called PowerBlog. 

Archive.org: https://web.archive.org/web/20031010203956/http://powerblog.net/
PowerBlog (beta release)

My motivations were three:

  1. To grow my skill set as a software developer.
  2. To see if I could maybe make a living building and selling home-grown software. (PowerBlog was available for sale, but didn't sell.)
  3. To write, and write, and write. My brain was churning, and I wanted to divulge my thoughts out loud. 

PowerBlog was written in VB6. It consisted of an instance of the Internet Explorer WebBrowser COM object to drive a true WYSIWYG experience, design philosophies similar to those of Outlook Express (an e-mail and newsgroup client), and a publish mechanism component model including support for user-defined publishing scripts using the Active Scripting (VBScript/JScript) runtime component. When .NET v1.1 came out, I rewrote PowerBlog, reusing the same components from Microsoft but adding more features like XML-RPC--by which I built my own client/server libraries with self-documenting HTML output similar to ASP.NET's .asmx behavior--and style synchronization so that the user could edit in the same CSS style as the page itself. I tried to sell PowerBlog for $10 to $20 per download. It didn't sell. That's kind of okay, the value of my efforts was found in my skills ramp-up and in my having a tool for my own writing.

PowerBlog got Microsoft's attention. Three things happened, all of a sudden, with Microsoft: 1) the Windows Live initiative kicked off blogging from their web site (which they've since shuttered), 2) Microsoft interviewed me, and 3) Windows Live Writer was written, stealing most of the good ideas I had in PowerBlog (admittedly, tidbits of the ideas in PowerBlog were stolen from another app called W.bloggar). I got nothing for it but some bragging rights for the ideas. 

I was broke. My motivators to go out on my own (a taxes-related need to go offline to do some research) subsided. So I stopped development and went back out into the workforce. When .NET 2.0 came out along with a new version of Internet Explorer, it broke PowerBlog permanently. Microsoft published WebBrowser components that directly collided with the WebBrowser components I had created for PowerBlog. So PowerBlog isn't available today--it will just crash for you if you try--and I couldn't even build my own source code anymore without finding an old Windows XP box with an old version of Visual Studio and .NET Framework 1.1. It just wasn't worth the hassle, so I never bothered. Maintenance was immediately halted. PowerBlog.net went offline. I didn't much care. I did need to keep blogging and writing, so eventually I found BlogEngine.net and transferred my technical blog content to it. I updated the blog engine code once or twice as it evolved, but eventually I just let it sit dormant.

So that's where my tech blog instance stands today. http://www.jondavis.net/techblog/ runs on a decade-old version of BlogEngine.net which I've tailored a bit to include banned IPs and some home-grown CAPTCHA functionality for the comments. Over the years I've looked at a few other blogging engines I considered adopting; most notably I remember looking at NBlog, which gave me the most control but required me to implement it in code and didn't sufficiently demonstrate black-box modularity, and Ghost, which would grow me in Node.js but ticked me off with its stance of only supporting Markdown and never WYSIWYG HTML editing. 

Meanwhile, the whole time, developers and non-developers alike have pooh-poohed the interestingness of blogs, blogging software, blogging strategies, blogging services, etc. I can certainly see why. At its most basic level, blogs are little more than single-table CRUD apps. You have an index of recent blogs, you have a detail view of an individual blog post, you have an editor and creator page, and you can delete a blog post. The content of a blog post is, to the casual observer, nothing but four components: title, body, author, and publish date. If that's what you think a blog is, you're naive, and likely didn't bother reading this far.

At the very least, on the technical side, blogging brought about a few innovations that revolutionized the Internet, including: 

  • XML-RPC (SOAP's midget predecessor)
  • RSS
  • Publish pings
  • Atom
  • OPML
Not to mention less specific innovations, like end-user-accessible management of slugs (URL paths) for enhancing SEO. And the whole notion of blogging became the basis of the World Wide Web's sense of community interconnectivity. The broad set of Web sites was social networking, before social networks took over social networking. Blog sites were known, and linked to each other. Followers of blogs created view counts that filled the hole that Facebook post likes fill today. 
 
This is perhaps why dev.to is such a success story; it's like a Facebook group, for coders, where every single post is a blog article.
 
Anyway, so now I sit here typing into the ten-year-old deployment of BlogEngine.net on my resurrected tech blog, wondering what my next steps should be. Well, I'm pretty sure I want to create my own blog engine again, and build on it. Yes, it's a two-decades-stale idea, but here are my motives:
 
  • My blog philosophies still differ somewhat from the offerings currently available (although not far at all from BlogEngine.net)
  • I want to keep growing my technical skill set (not just write about them). There are revised web and software standards I need to freshen up on, and apply my learnings. Creating a blog site has always been a good mechanism to sample and apply the most basic web tech skills.
  • I need anything I deploy to a technical blog to portray my own personal portfolio, including the site on which my writings are displayed.
  • I still like the idea of taking something I myself created and putting it out on github as a complete package and perhaps managing a hosted instance of it, similar to WordPress.
All this said, I don't think I'll be doing a full-blown BlogEngine.net equivalent. It will be a simple thing, and from it I'll take some ideas to glean from and apply them to other software efforts I'll be working on. 
 
But a few things I will point out about the technical strategies and approaches I intend to implement:
  1. Rich front-end libs (React, Vue, Blazor, etc)?? While I might take, say, Ghost's approach in having some rich interactivity for the author's experience, the end user (reader) experience cannot be forced to utilize them. A blog page must be rich in HTML semantics and index well SEO-wise for the web at large, so the HTTP response payload cannot be Javascript stub placeholders for dynamic content.
  2. Componentization is more important than anything else. I do intend to iterate over a lot of the features this old instance of BlogEngine.net has, as well as current blog engines, and figure out ways to spoonfeed those features without embedding them deeply in interdependencies. The worst thing I could do is embed all the bells and whistles directly into ASP.NET Razor Pages themselves. I intend to apply decorator pattern principles (a rough sketch follows this list), perhaps applying sidecar architecture for some components where appropriate. Right now I'm looking at using Identity Server 4, for example, as a sidecar architecture for author identity and privileges. Speaking of which,
  3. Everything must be readily run-anywhere and black-box deployable: as ad hoc self-hosted code, an Azure web app service (PaaS), a Docker container, or a VM (IaaS). I also intend to set up a hosted instance and offer limited blog hosting for free, with extended features for a fee. In this sense, WordPress is a competing platform.
  4. "Extended features, like ..?"  Premium templates. Large storage hosting. And premium add-on features yet to be defined.
  5. The templates can be based on Razor, but "pages" cannot be tightly defined. If this platform grows up, it needs to be more than a blog; it needs to be a mini-CMS (content management system).
  6. The social networking ecosystem is a mandatory consideration; full blogging must integrate with microblogging and multi-contributor cross-pollination, including across disparate sites and systems. More on this later, but imagine trusted Twitter users liking Facebook posts.
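To illustrate the componentization point in item 2 above, here is a rough, purely hypothetical sketch of the decorator idea--the IPostRenderer, BlogPost, and decorator names are made up for illustration, not a design I've committed to:

// Core rendering abstraction for the hypothetical engine.
public interface IPostRenderer
{
    string Render(BlogPost post);
}

// The undecorated renderer: just emits the stored HTML body.
public sealed class MarkupPostRenderer : IPostRenderer
{
    public string Render(BlogPost post) => post.BodyHtml;
}

// A feature (a canonical slug link for SEO, in this case) implemented as a
// decorator that wraps the renderer instead of being baked into Razor pages.
public sealed class SlugCanonicalLinkDecorator : IPostRenderer
{
    private readonly IPostRenderer _inner;

    public SlugCanonicalLinkDecorator(IPostRenderer inner) => _inner = inner;

    public string Render(BlogPost post) =>
        $"<link rel=\"canonical\" href=\"/posts/{post.Slug}\" />\n" + _inner.Render(post);
}

public sealed class BlogPost
{
    public string Slug { get; set; }
    public string BodyHtml { get; set; }
}

Features would then be composed at startup (for instance, via the DI container: new SlugCanonicalLinkDecorator(new MarkupPostRenderer())), so adding or removing a bell or whistle becomes a composition change rather than a Razor page edit.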
Again, yes, I am unpacking a two-decades-old solution proposal to a Web 1.0 / 2.0 problem, but that problem never really went away, it just became less prominent. I really don't anticipate this blowing up to being much of anything. At a minimum, I need to convert all of my content here on this technical blog (at http://www.jondavis.net/techblog/) to a home-grown platform of my own making, even if after doing that I call it done and walk away.
 
Another motivator for getting that done is the fact that, even as I write this, I'm writing in very tiny text on a very high-resolution screen, and you're probably squinting reading this, too, if you're reading it from my web site, and if you actually made it this far. Again, yes, I could simply replace the template and maybe refresh the version of BlogEngine.net which I'm using. But why do that, when I call myself a 23-years-plus experienced web development veteran and could just build my own? Am I really so useless in my aging as to be unable to build something of my own again? It's not like a blog engine will get me a better job, duh, so much as the fact that using someone else's engine just plain looks bad on me. So, this is a lightweight test of mettle. I need to do this because I need to.
 
On a final note, I'll mention again what I'm focusing on first: "who am I"--authentication and authorization. I'm learning up on Identity Server 4. I'm still new to it, so this will take some time. Right now it looks like the black box version of my blog engine will come with an instance of Identity Server 4, based perhaps on ASP.NET Core Identity as the user store; developers can tweak it out to their heart's content. I'm still mulling over whether to just embed it into the blog app (underkill? too much in one?) or split it out as a separate SSO-oriented web app (overkill? too many distributed parts for a mere blog?), but at this point I'll likely do the former for the black box download (source code included) and the latter for the hosted instance which I hope to set up and offer to people wordpress.com-style.
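For what it's worth, the embedded option would amount to something like the following in Startup--a rough sketch assuming ASP.NET Core 2.x-era wiring with the IdentityServer4, IdentityServer4.AspNetIdentity, and EF Core packages; the client definition, the SQLite store, and the ApplicationUser type are placeholders, not settled decisions:

using IdentityServer4.Models;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public class ApplicationUser : IdentityUser { }

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // ASP.NET Core Identity as the user store
        services.AddDbContext<ApplicationDbContext>(o => o.UseSqlite("Data Source=blog.db"));
        services.AddIdentity<ApplicationUser, IdentityRole>()
                .AddEntityFrameworkStores<ApplicationDbContext>()
                .AddDefaultTokenProviders();

        // IdentityServer4 embedded directly in the blog app
        services.AddIdentityServer()
                .AddDeveloperSigningCredential() // dev-only signing key
                .AddInMemoryIdentityResources(new IdentityResource[]
                {
                    new IdentityResources.OpenId(),
                    new IdentityResources.Profile()
                })
                .AddInMemoryClients(new[]
                {
                    new Client
                    {
                        ClientId = "blog-admin", // hypothetical client for the authoring UI
                        AllowedGrantTypes = GrantTypes.Code,
                        RedirectUris = { "https://localhost:5001/signin-oidc" },
                        AllowedScopes = { "openid", "profile" }
                    }
                })
                .AddAspNetIdentity<ApplicationUser>(); // bridge to the Identity user store

        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseIdentityServer(); // also wires up the authentication middleware
        app.UseMvcWithDefaultRoute();
    }
}

The separate-SSO-app option would be the same registration moved into its own host, with the blog app consuming it as an ordinary OpenID Connect client.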

 


Blog | General Technology | Web Development

Technology Status Update 2016

by Jon Davis 10. July 2016 09:09

Hello, peephole. (people.) Just a little update. I've been keeping this blog online for some time, and my most recent blog entries have been so negative that I keep having to see that negativity every time I check to make sure the blog's up, lol. I'm tired of it, so I thought I'd post something positive.

My current job is one I hope to keep for years and years to come, and if that doesn't work out I'll be looking for one just like it and try to keep it for years and years to come. I'm so done with contracting and consulting (except the occasional mentoring session on code mentor -dot- io). I'm still developing, of course, and as technology is changing, here's what's up as I see it. 

  1. Azure is relevant. 
    The world really has shifted to cloud and the majority of companies, finally, are offloading their hosting to the cloud. AWS, Azure, take your pick; everyone who hates Microsoft will obviously choose AWS, but Azure is the obvious choice for Microsoft stack folks, and there is nothing meaningful AWS has that Azure doesn't at this point. The amount of stuff on Azure is sufficiently terrifying in quantity and supposed quality to give me a thrill. So I'm done with hating on Azure; after all their marketing and nagging and pushing, Microsoft has crossed a threshold of market saturation, and I am adequately impressed. I guess that means I have to be an Azure fan, too, now. Fine. Yay Azure, woo. -.-
  2. ASP.NET is officially rebooted. 
    So I hear this thing called ASP.NET Core 1.0, formerly known as ASP.NET 5, formerly known as ASP.NET vNext, has RTM'd, and I hear it's like super duper important. It snuck by me, I haven't mastered it, but I know it enough to know a few things:
    • It's a total redux by means of redo. It's like the Star Trek reboot except it’s smaller and there are fewer planets it can manage, but it’s exactly like the Star Trek reboot in that it will probably implode yours.
    • If you've built your career on ASP.NET and you want to continue living on ASP.NET's laurels, now is not the time to master ASP.NET Core 1.0. Give it another year or two to mature. 
    • If you're stuck on or otherwise fascinated by non-Microsoft operating systems, namely Mac and Linux, but you want to use the Microsoft programming stack, you absolutely must learn and master ASP.NET Core 1.0 and EF7.
    • If all you liked from ASP.NET Core 1.0 was the dynamic configs and build-time transpiles, you don't need ASP.NET Core for that LOL LOL ROFLMAO LOL LOL LOL *cough*
  3. The Aurelia Javascript framework is nearly ready.
    Overall, Javascript framework trends have stopped. Companies are building upon AngularJS 1.x. Everyone who’s behind is talking about React as if it was new and suddenly newly relevant (it isn’t new anymore). Everyone still implementing Knockout are out of the loop and will die off soon enough. jQuery is still ubiquitous and yet ignored as a thing, but meanwhile it just turned v3.

    I don’t know what to think about things anymore. Angular 2.0 requires TypeScript, people hate TypeScript because they hate transpilers. People are still comparing TypeScript with CoffeeScript. People are dumb. If it wasn’t for people I might like Angular 2.0, and for that matter I’d be all over AureliaJS, which is much nicer but just doesn’t have Google as the titanic marketing arm. In the end, let’s just get stuff done, guys. Build stuff. Don’t worry about frameworks. Learn them all as you need them.
  4. Node.js is fading and yet slowly growing in relevance.
    Do you remember .. oh heck unless you're graying probably not, anyway .. do you remember back in the day when the dynamic Internet was first loosed on the public and C/C++ and Perl were used to execute from cgi-bin, and if you wanted to add dynamic stuff to a web site you had to learn Perl and maybe find Perl pearls and plop them into your own cgi-bin? Yeah, no, I never really learned Perl, either, but I did notice the trend, but in the end, what did C/C++ and Perl mean to us up until the last decade? Answer: ubiquitous availability, but not web server functionality, just an ever-present availability for scripts, utilities, hacks, and whatever. That is where node.js is headed. Node.js for anything web related has become and will continue to be a gigantic mess of disorganized, everyone-is-equal, noisily integrated modules that sort of work but will never be as stable in built compositions as more carefully organized platforms. Frankly, I see node.js being more relevant as a workstation runtime than a server runtime. Right now I'm looking at maybe poking at it in a TFS build environment, but not so much for hosting things.
    I will always have a bitter taste in my mouth with node.js after trying to get socket.io integrated with Express and watching the whole thing just crumble, with no documentation or community help to resolve it, and this happened not just once on the job (never resolved before I walked away) but also during a code-mentor mentoring session (which we didn't figure out), even after a good year or so of maturity of the platform after the first instance. I still like node.js but will no longer be trying to build a career on it.
  5. Pay close attention and learn up on Swagger aka OpenAPI. 
    Remember when -- oh wait, no, unless you're graying, .. nevermind .. anyway, -- once upon a time something called SOAP came out, and with it came a self-documentation feature that was a combination of WSDL and some really handy HTML-generated scaffolding built into web services that would let you manually test SOAP-based services by filling out a self-generated form. Well, now that JSON-based REST is the entirety of the playing field, we need the same self-documentation. That's where Swagger came in a couple years ago, and everyone uses it now. Swagger needs some serious overhauling--someone needs to come up with a Swagger-compliant UI built on more modular and configurable components, for example--but as a drop-in self-documentation feature for REST services it fits the bill.
    • Swagger can be had on .NET using a lib called Swashbuckle. If you use OData, there is a lib called Swashbuckle.OData. We use it very, very heavily where I work. (I was the one who found it and brought it in.) "Make sure it shows up and works in Swagger" is a requirement for all REST/OData APIs we build now.
    • Swagger is now OpenAPI but it's still Swagger, there are not yet any OpenAPI artifacts that I know of other than Swagger. Which is lame. Swagger is ugly. Featureful, but ugly, and non-modular.
    • Microsoft is listed as a contributing member of the OpenAPI committee, but I don't know what that means, and I don't see any generic output from OpenAPI yet. I'm worried that Microsoft will build a black box (rather than white box) Swagger-compliant alternative for ASP.NET Core.
    • Other curious ones to pay attention to, but which I don't see as significantly supported by the .NET community yet (maybe I haven't looked hard enough), are:
  6. OData v4 has potential but is implementation-heavy and sorely needs a v5. 
    A lot of investments have been made in OData v4 as a web-based facade to Entity Framework data resources. It's the foundation of everything the team I'm with is working on, and I've learned to hate it. LOL. But I see its potential. I hope investments continue because it is sorely missing fundamental features like
    • MS OData needs better navigation property filtering and security checking, whether by optionally redirecting navigation properties to EDM-mapped controller routes (yes, taking a performance hit) or some other means
    • MS OData '/$count' breaks when [ODataRoute] is declared, boo.
    • OData spec sorely needs "DISTINCT" feature
    • $select needs to be smarter about returning anonymous models and not just eliminating fields; if all you want is one field in a nested navigation property in a nested navigation property (the equivalent of LINQ's .Select(x => new { ID = x.ID, DesiredField2 = x.Child.Child2.DesiredField2 })), in the OData result set you will have to dive into an array and then into another array to find the one desired field
    • MS OData output serialization is very slow and CPU-heavy
    • Custom actions and functions, and making them exposed to Swagger via Swashbuckle.OData, make me want to pull my hair out. It sometimes takes two hours of screaming and choking people to set up a route in OData that would take me two minutes in Web API, and in the end I end up with a weird namespaced function name in the route like /OData/Widgets/Acme.GetCompositeThingmajig(4); there's no getting away from even the default namespace, and your EDM definition must be an EXACT match to what is clearly, obviously spelled out in the C# controller implementation or you die. (A hedged sketch of this wiring appears after this list.) I mean, if Swashbuckle / Swashbuckle.OData can mostly figure most of it out without making us dress up in a weird Halloween costume, surely Microsoft's EDM generator should have been able to.
  7. "Simple CRUD apps" vs "messaging-oriented DDD apps"
    has become the new Microsoft vs Linux or C# vs Java or SQL vs NoSQL. 

    The war is really ugly. Over the last two or three years people have really been talking about how microservices and reaction-oriented software have turned the software industry upside down. Those who hop on the bandwagon are neglecting to know when to choose simpler tooling chains for simple jobs; meanwhile, those who refuse to jump on the bandwagon are using some really harsh, cruel words to describe the trend ("idiots", "morons", etc). We need to learn to love and embrace all of these forms of software, allow them to grow us up, and know when to choose which pattern for which job.
    • Simple CRUD apps can still accomplish most business needs, making them preferable most of the time
      • .. but they don't scale well
      • .. and they require relatively little development knowledge to build and grow
    • Non-transactional message-oriented solutions and related patterns like CQRS-ES scale out well but scale developers' and testers' comprehension very poorly; they have an exponential scale of complexity footprint, but for the thrill seekers they can be, frankly, hella fun and interesting so long as they are not built upon ancient ESB systems like SAP and so long as people can integrate in software planning war rooms.
    • Disparate data sourcing as with DDD with partial data replication is a DBA's nightmare. DBAs will always hate it, their opinions will always be biased, and they will always be right in their minds that it is wrong and foolish to go that route. They will sometimes be completely correct.

  8. Integrated functional unit tests are more valuable than TDD-style purist unit tests. That’s my new conclusion about developer testing in 2016. Purist TDD mindset still has a role in the software developer’s life. But there is still value in automated integration tests, and when things like Entity Framework are heavily in play, apparently it’s better to build upon LocalDB automation than Moq.
    At least, that’s what my current employer has forced me to believe. Sadly, the purist TDD mindset that I tried to adopt and bring to the table was not even slightly appreciated. I don’t know if I’m going to burn in hell for being persuaded out of a purist unit testing mindset or not. We shall see, we shall see.
  9. I'm hearing some weird and creepy rumors I don't think I like about SQL Server moving to Linux and eventually getting itself renamed. I don't like it, I think it's unnecessary. Microsoft should just create another product. Let SQL Server be SQL Server for Windows forever. Careers are built on such things. Bad Microsoft! Windows 8, .NET Framework version name fiascos, Lync vs Skype for Business, when will you ever learn to stop breaking marketing details to fix what is already successful??!
  10. Speaking of SQL Server, SQL Server 2016 is RTM'd, and full blown SSMS 2016 is free.
  11. On-premises TFS 2015 only just acquired gated check-in build support in a recent update. Seriously, like, what the heck, Microsoft? It's also super buggy; you get a nasty error message in Visual Studio while monitoring its progress. This is laughable.
    • Clear message from Microsoft: "If you want a premium TFS experience, Azure / Visual Studio Online is where you have to go." Microsoft is no longer a shrink-wrapped product company; they sell shrink-wrapped software only for the legacy folks as an afterthought. They are a hosted-platform company now, all the way.
      • This means that Windows 10 machines including Nokia devices are moving to be subscription boxes with dumb client deployments. Boo.
  12. Another rumor I've heard is that
    Microsoft is going to abandon the game industry.

    The Xbox platform was awesome because Microsoft was all in. But they're not all in anymore, and it shows, and so now as they look at their lackluster profits, what did they expect?
    • Microsoft: Either stay all-in with Xbox and also Windows 10 (dudes, have you seen Steam's Big Picture mode? no excuse!) or say goodbye to the consumer market forever. Seriously. Because we who thrive on the Microsoft platform are also gamers. I would recommend knocking yourselves over to partner with Valve to co-own the whole entertainment world like the oligarchies that both of you are, since Valve did so well at keeping the Windows PC relevant to the gaming market.
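Since I griped about it in point 6 above, here's roughly what that custom OData function wiring looks like--a hedged sketch, not the code from work: Widget, GetCompositeThingmajig, and the "Default" namespace are stand-ins, and I'm assuming Web API 2 with the System.Web.OData (Microsoft.AspNet.OData 5.x/6.x) packages:

using System.Linq;
using System.Web.Http;
using System.Web.OData;
using System.Web.OData.Builder;
using System.Web.OData.Extensions;
using System.Web.OData.Routing;

public class Widget
{
    public int Id { get; set; }
}

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<Widget>("Widgets");

        // Collection-bound function; the model namespace ("Default" unless overridden)
        // leaks into the route, e.g. /OData/Widgets/Default.GetCompositeThingmajig(id=4)
        builder.EntityType<Widget>().Collection
               .Function("GetCompositeThingmajig")
               .Returns<string>()
               .Parameter<int>("id");

        config.MapODataServiceRoute("odata", "OData", builder.GetEdmModel());
    }
}

public class WidgetsController : ODataController
{
    [EnableQuery]
    public IQueryable<Widget> Get()
    {
        return Enumerable.Empty<Widget>().AsQueryable(); // stand-in for a real IQueryable
    }

    // The ODataRoute template must match the EDM declaration exactly,
    // namespace and all, or routing simply fails.
    [HttpGet]
    [ODataRoute("Widgets/Default.GetCompositeThingmajig(id={id})")]
    public IHttpActionResult GetCompositeThingmajig([FromODataUri] int id)
    {
        return Ok("composite thingmajig for " + id);
    }
}

Getting the EDM declaration, the route template, and the controller signature to agree (and then getting Swashbuckle.OData to surface it) is the two-hour part.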

For the most part I've probably lived under a rock, I'm sure; I've been too busy enjoying my new 2016 Subaru WRX (a 4-door racecar), which I am probably going to sell in the next year because I didn't get out of debt first, but not before getting a Kawasaki Vulcan S ABS Café as my first motorized two-wheeler, riding that between playing Steam games, going camping, and exploring other ways to appreciate being alive on this planet. Maybe someday I'll learn to help the homeless and unfed, as I should. BTW, in the end I happen to know that "love God and love people" are the only two things that matter in life. The rest is fluff. But I'm so selfish; man, do I enjoy fluff. I feel like such a jerk. Those who know me know that I am one. God help me.

(Photos: 2016 Kawasaki Vulcan S ABS Café, and a two-row grid of other images.)
Top row: Fluff that doesn’t matter and distracts me from matters of substance.
Bottom row: Matters of substance.


ASP.NET | Blog | Career | Computers and Internet | Cool Tools | LINQ | Microsoft Windows | Open Source | Software Development | Web Development | Windows

Announcing Fast Koala, an alternative to Slow Cheetah

by Jon Davis 17. July 2015 19:47

So this is a quick FYI for teh blogrollz that I have recently been working on a little Visual Studio extension that will do for web apps what Slow Cheetah refused to do. It enables build-time transformations for both web apps and for Windows apps and classlibs. 

Here's the extension: https://visualstudiogallery.msdn.microsoft.com/7bc82ddf-e51b-4bb4-942f-d76526a922a0  

Here's the Github: https://github.com/stimpy77/FastKoala

Either link will explain more.

Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to it to make it truly an I/O library. In the meantime, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are simplified into a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (as optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below
Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as when working with localStorage, it returns a "synchronous promise", which is essentially a function that can optionally be invoked immediately; doing so wraps .success(value) and returns the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The examples below are the same, only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implemented functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique, using xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently with assertions that it will have some CRUD operations available, the perspective is flipped and you are focusing on what you need to do, the verbs, first, while the target becomes more like a parameter or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves so that, rather than repositories having an unknown set of operations with unknown interface styles and other features, instead, your standard CRUD operations, which are predictable, have a set of valid repository targets that support those operations.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented based on JSON/Javascript literals field sets (send only deltas)
Examples
// initialize

var xio = Xio(); // initialize a module instance named "xio"
localStorage
xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) 
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});
sessionStorage
xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");
cookie
xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, retaining only the xio.set["cookie"](key, value) call, you can use the automatically returned helper replacer functions:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");

Note that using this approach, while more expressive and potentially more convertible to other CRUD targets, also results in each helper function deleting the previous value to set the value with the new adjustment.

session cookie
xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
persistent cookie
xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
web server resource (basics)
var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().

Where the methods property is defined as an array of "GET", "POST", etc., an XHR route will be internally created for each one, mapping to the standard XIO verbs, based on the rest of the options defined in the options object that is passed in as a parameter to define(). The return value of define() is an object that lists all of the various operations that were wrapped for XIO (i.e. get(), post(), etc).

The rest of the options are used, for now, as jQuery's $.ajax(..., options) parameter. The async property defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally immediately invoke with parens (()), which will return the value that is normally passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.

web server resource (asynchronous GET)
xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});
web server resource (synchronous GET)
xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction
web server resource POST
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});
web server resource (DELETE)
xio.delete.myresourceContainer("myresource");
web server resource (PUT)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
web server resource (PATCH)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
custom implementation and redefinition
xio.define("custom1", {
    get: function(key) { return "teh value for " + key};
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fallback to long polling, and that has no dependency upon jQuery. Instead, what has so far become implemented has been a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on nodejs.

Additionally, in a completely isolated parallel path, if no progress is made by the ASP.NET SignalR team to free the SignalR client from jQuery, xio.js might become tailored to be a somewhat code-compatible client implementation or a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at top, you definitely may use it for any project. I will drop in a license (probably Apache 2 or BSD or Creative Commons Attribution or somesuch) in the near future.

A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date, and without devising a new approach to route manipulation or any of the other fundamental infrastructural designs for the endpoints (because there were too many), our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>
<div id="results"></div>
<div><button id="reload">Reload</button></div>
@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function() {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought was the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case; it is actually server-side, but this is probably because of the amount of data being transferred. I'll explain after I explain what I did under my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy..” semantics, such as varying the route parameters. That is great except that the route and parameters were currently not changing in our implementation.

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.

 

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    Response.RemoveOutputCacheItem(Url.Action("GetData"));
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

 

 

<button id="invalidate">Invalidate</button></div>

 

 

$('#invalidate').on('click', function() {
    $.post($APPROOT + "Home/DoSomething", null, function(o) {
        window.location.reload();
    }, 'json');
});

 

While Reload has no effect on the Last modified value, the
Invalidate button causes the date to increment.

When testing, this actually worked quite well. But concerns were raised about the payload of memory on the server. Personally I think the memory payload in practically any server-side caching is negligible, certainly if it is small enough that it would be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is that transmission; the point of caching is to make the user experience as smooth and seamless as possible with minimal waiting, so if the user is waiting for a (cached) payload, while it may be much faster than the time taken to recalculate or re-acquire the data, it is still measurably slower than relying on browser cache.

The default implementation of OutputCacheAttribute is actually OutputCacheLocation.Any. This indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests, for tiny payloads, the behavior seemed to be caching on the server and no caching on the client; for a large payload from GET requests with querystring parameters, the behavior seemed to be caching on the client but with an HTTP request carrying an "If-Modified-Since" header, resulting in a 304 Not Modified from the server (indicating it was also cached on the server but verified by the server that the client's cache remains valid); and for a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation checking from the client (no HTTP request for an If-Modified-Since check). Now, to be quite honest, I am only guessing that these were the distinguishing factors behind these observations. Honestly, I saw variations of these behaviors happening all over the place as I tinkered with scenarios, and this was the initial pattern I felt I was observing.

At any rate, for our purposes we were currently stuck with relying on "Any" as the location, which in theory would remove server-side caching if the server ran short on RAM (I don't know; the truth can probably be researched, but I don't have time to get into it). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you really can’t. The browser controls the cache bucket and no browsers provide hooks into the cache to invalidate them. But we can get smart about this, and work around the problem, by bypassing the cached data. Cached HTTP results are stored on the basis of varying by the full raw URL on HTTP GET methods, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and are only cached if allowed to be cached in the first place as per the HTTP header directives in the HTTP response. So, to bypass the cache you don’t cache, or you need to know up front how long the cache should remain until it expires—neither of these being acceptable in a dynamic application—or you need to use POST instead of GET, or you need to vary up the URL.

Microsoft originally sidestepped the caching problem in ASP.NET 1.x by building the "normal" development model around the lifecycle of <form> tags that always posted back over HTTP POST. Responses to POST requests are never cached. But POSTing is not clean; it violates the semantics of the verb when nothing is being sent up and data is only being retrieved.

You can also use ETag in the HTTP headers, which isn’t particularly helpful in a dynamic application as it is no different from a URL + expiration policy.

To summarize, the options for controlling cache are these (a short code sketch follows the list):

  • Disable caching from the server in the response headers (Cache-Control: no-cache, plus Pragma: no-cache for older clients)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST, not GET
  • Use an ETag
  • Vary the URL (case-sensitively)
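
For reference, the first, second, and fourth of these can be set straight from an ASP.NET MVC action through the HttpCachePolicy API. This is only an illustrative sketch (the controller and action names are made up); the rest of this article takes the last approach, varying the URL:

using System;
using System.Web;
using System.Web.Mvc;

public class CacheHeaderSamplesController : Controller // hypothetical, for illustration only
{
    // Option 1: disable caching for this response entirely.
    public JsonResult NoCachedData()
    {
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.Cache.SetNoStore();
        return Json(new { LastModified = DateTime.Now.ToString() }, JsonRequestBehavior.AllowGet);
    }

    // Options 2 and 4: declare an explicit expiration and an ETag.
    public JsonResult ExpiringData()
    {
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(5));
        Response.Cache.SetETag("\"data-v1\""); // hypothetical version token
        return Json(new { LastModified = DateTime.Now.ToString() }, JsonRequestBehavior.AllowGet);
    }
}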

Given our options, we need to vary up the URL. There are a number of approaches to this, but almost all of them involve appending or modifying the querystring with parameters that the server is expected to ignore.

$.getJSON($APPROOT + "Home/GetData?_=" + Date.now(), function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you're reading this blog article, you're probably rolling your eyes. "Duh.") jQuery inherently supports cache-busting, but not from $.getJSON() by default; it only does it in $.ajax() when the options parameter includes { cache: false }, unless you first invoke $.ajaxSetup({ cache: false }); to apply cache-busting to all jQuery AJAX requests. Otherwise, for $.getJSON() you have to do it manually by appending to the URL. (Alright, you can stop rolling your eyes at me now, I'm just trying to be thorough here.)

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you're using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. How to do this will depend upon the JavaScript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with an invalidation routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example, since you're hacking at a third-party library; a better approach might be to wrap the invocation of $.getJSON() with an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that invalidation of cached data derived from the server needs to be triggered by the server, because it is the server, not the client, that knows when client-cached data is no longer up to date. Depending on the application, the client logic might know by keeping track of which server endpoints it is touching, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale only under specific conditions that the server alone may know, and perhaps only after some calculation. In other words, invalidation needs to be pushed by the server.

One brute-force, burdensome, and perhaps slightly crazy approach to this might be to use actual "push technology" (formerly Comet or long-polling, now WebSockets), implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server so that the server has an open socket through which it can push invalidation flags to the client.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.
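If you did want to go the push route, a minimal server-side sketch of the idea might look like the following, assuming the ASP.NET SignalR package is installed and its hubs are mapped at application startup. The hub, the helper class, and the invalidateCacheItem client callback are all hypothetical and not part of the project described in this article.

using Microsoft.AspNet.SignalR;

// An empty hub; clients connect to it only to receive invalidation notices.
public class CacheInvalidationHub : Hub
{
}

public static class CacheInvalidationNotifier
{
    // Call this from server application logic whenever an operation makes a cached URL stale.
    public static void PushInvalidation(string url)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<CacheInvalidationHub>();
        context.Clients.All.invalidateCacheItem(url); // client-side callback registered by the browser
    }
}

The browser would register an invalidateCacheItem handler on its hub proxy and record the URL in the same invalidation data set described below.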

While we may not know which server endpoint might introduce the invalidation of client-cached data, we can assume that invalidation will be triggered by some server endpoint, and build invalidation trigger logic around the server's HTTP responses.

To begin implementing some sort of invalidation trigger on the server, I can flag invalidations to the client using an HTTP header.

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    HttpResponse.RemoveOutputCacheItem(url); // invalidate on server (RemoveOutputCacheItem is a static method on System.Web.HttpResponse)
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says that “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one”. The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

Before I do anything, first of all I need to refactor the existing AJAX functionality on the client so that, instead of using $.getJSON, I might use $.ajax or some other flexible XHR handler, and wrap it all in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function (url, data, callback) {
    return httpAction(url, data, callback, "GET");
};

var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};

var httpAction = function (url, data, callback, method) {
    url = cachebust(url);
    if (typeof (data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method, // use the verb that was passed in
        success: function (responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};

var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};

function handleInvalidationFlags(xhr) {
    // not yet implemented
}

function cachebust(url) {
    // not yet implemented
    return url;
}

// application logic
httpGET($APPROOT + "Home/GetData", function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

$('#reload').on('click', function () {
    window.location.reload();
});

$('#invalidate').on('click', function () {
    httpPOST($APPROOT + "Home/Invalidate", function (o) {
        window.location.reload();
    });
});

At this point we're not doing anything new yet; we've just broken up the HTTP/XHR functionality into wrapper functions that we can now modify to manipulate the request and to deal with the invalidation flag in the response. All of our remaining work will be in handleInvalidationFlags(), which captures the new header we just emitted from the server, and cachebust(), which hijacks the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there and add the flagged item to a data set stored locally in the browser with web storage. The best place to put this data set is sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration) works, but is less ideal because it adds to the payload of every HTTP request. Putting it in localStorage is also less ideal, because we want the invalidation flags to go away when the browser session ends; that is when the original browser cache would expire anyway. There is one caveat to sessionStorage: if the user opens a new tab or window, that new tab or window starts with an empty sessionStorage but may reuse the browser cache. The only workarounds I know of at the moment are to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be. We will need to ignore case when matching URLs with cache invalidation flags.

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {
    // capture HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // update invalidation flags data set
    for (var i in invalidatedItemsHeader) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }
    // store revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET, which ignores case on the path, force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // if the item matches, return the URL with a cache-busting parameter concatenated
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }
    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is retained as the concatenated cache-busting value. This allows the data to remain cached, just refreshed as of that point in time. If invalidation occurs again, that value is revised to the new date/time.

Running this now, after invalidation is triggered by the server, the subsequent request of data is appended with a cache-buster querystring field.


 

In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might be by following these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited value to list multiple items. (Do not URI-encode the semicolon when using it as a list delimiter.) Semicolon is a reserved character in URIs and a valid delimiter in HTTP headers, so this works.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:
        {
            "http://url.com/route/action": (date_value_of_invalidation_flag),
            "http://url.com/route/action/2": (date_value_of_invalidation_flag)
        }
  4. Hijack all XHR requests so that the URL is appropriately appended with cachebusting querystring parameter if the URL was found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default convention ignore case (“/Route/Action” == “/route/action”) on the path, but the HTTP specification does not and therefore the browser cache bucket will not ignore case. Force all URL checks for invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring, otherwise for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters are in consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client.
  7. These steps are for "pull"-based invalidation flags carried back to the client on XHR responses. For "push"-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication using WebSockets or long polling. Server application logic can then invoke this channel or hub to send an invalidation flag to the client or to all clients.
  8. On the client side, an invalidation flag “push” triggered in #7 above, for which #1 and #2 above would no longer apply, can still utilize #3 through #6.

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip


Tags:

ASP.NET | C# | Javascript | Techniques | Web Development

ASP.NET MVC 4: Where Have All The Global.asax Routes Gone?

by Jon Davis 23. June 2012 03:03

I ran into this a few days back and had been meaning to blog about it, so here it finally is while it’s still interesting information.

In ASP.NET MVC 1.0, 2.0, and 3.0, routes are defined in the Global.asax.cs file in a method called RegisterRoutes(..).

[Screenshot: RegisterRoutes(..) defined in Global.asax.cs in ASP.NET MVC 3]

It had become an almost unconscious navigate-and-click routine for me to open Global.asax.cs up to diagnose routing errors and to introduce new routes. So upon starting a new ASP.NET MVC 4 application with Visual Studio 11 RC (or Visual Studio 2012 RC, whichever it will be called), it took me by surprise to find that the RegisterRoutes method is no longer defined there. In fact, the MvcApplication class defined in Global.asax.cs contains only 8 lines of code! I panicked when I saw this. Where do I edit my routes?!

[Screenshot: the new, much shorter Global.asax.cs in ASP.NET MVC 4]
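Roughly, that new Application_Start() boils down to the following (a sketch from memory; the exact set of config calls varies a bit by project template, and the config classes live in the project's own namespace):

using System.Web.Mvc;
using System.Web.Optimization;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();

        // The configuration details now live in App_Start/*.cs classes.
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}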

What kept me befuddled for far too long (quite a bit longer than a couple seconds, shame on me!) was the fact that these lines of code, when not actually read and only glanced at, look similar to the Application_Start() from the previous iteration of ASP.NET MVC:

[Screenshot: Application_Start() in Global.asax.cs in ASP.NET MVC 3]

Eventually I squinted and paid closer attention to the difference, and then I realized that the RegisterRoutes(..) method is still being invoked, but it is now managed in a separate configuration class. Is this class an application settings class? Is it a POCO class? A wrapper class for a web.config setting? Before I knew it I was already right-clicking on RegisterRoutes and choosing Go To Definition ..

[Screenshot: right-clicking RegisterRoutes and choosing Go To Definition]

Under Tools –> Options –> Projects and Solutions –> General I have Track Active Item in Solution Explorer enabled, so upon right-clicking an object member reference in code and choosing “Go To Definition” I always glance over at Solution Explorer to see where it navigates to in the tree. This is where I immediately found the new config files:

[Screenshot: the new App_Start folder in Solution Explorer]

.. in a new App_Start folder, which contains FilterConfig.cs, RouteConfig.cs, and BundleConfig.cs, as named by the invoking code in Global.asax.cs. And to answer my own question, these are POCO classes, each with a static method (i.e. RegisterRoutes).
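RouteConfig.cs, for example, looks roughly like this in the default template (a sketch; the namespace is whatever your project is named):

using System.Web.Mvc;
using System.Web.Routing;

namespace MyMvc4App // hypothetical project namespace
{
    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            routes.MapRoute(
                name: "Default",
                url: "{controller}/{action}/{id}",
                defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
            );
        }
    }
}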

I like this change. It's a minor refactoring that cleans up code. I don't understand the naming convention of App_Start, though. It seems like it should be called "Config" or something, or else Global.asax.cs should be moved into App_Start as well, since Application_Start() lives in Global.asax.cs. But whatever. Maintaining configuration details in one big Global.asax.cs file gets to be a bit of a pain sometimes, especially in growing projects, so I'm very glad that such configuration details are now tucked away in their own dedicated spaces.

I am curious but have not yet checked to determine whether App_Start as a new ASP.NET folder has any inherent behaviors associated with it, such as for example post-edit auto-compilation. I’m doubtful.

In future blog post(s), perhaps my next post, I’ll go over some of the other changes in ASP.NET MVC 4.


Tags:

ASP.NET | Web Development

Microsoft: We’re Not Stupid

by Jon Davis 2. May 2010 23:40

We get it, Microsoft. You want us to use Azure. You want us to build highly scalable software that will run in the cloud—your cloud. And we’ll get the wonderful support from Microsoft if we choose Azure. Yes, Microsoft. We get it.

We’re not stupid. You play up Azure like it’s a development skill, or some critical piece of the Visual Studio development puzzle, but we recognize that Azure is a proprietary cloud service that you’re advertising, not an essential tool chain component. Now go away. Please. Stop diluting the MSDN Magazine articles and the msdev Facebook app status posts with your marketing drivel about Azure. You are not going to get any checks written out from me for hosted solutions. We know that you want to profit from this. Heck, we even believe it might actually be a half-decent hosting service. But, Microsoft, you didn’t invent the cloud, there are other clouds out there, so tooling for your operating system using Visual Studio does not mean that I need to know diddly squat about your tirelessly hyped service.

There are a lot of other things you can talk about and still make a buck off of your platform. You can talk about how cool WPF is as a great way to build innovative Windows-only products. You can focus on how fast SQL Server 2008 R2 is and how Oracle wasted their money on a joke (mySQL). You can play up the wonderful extensibility of IIS 7 and all the neat kinds of innovative networked software you can build with it. Honestly, I don’t even know what you should talk about because you’re the ones who know the info, not me.

But, Microsoft, it's getting really boring to hear the constant hyping of Azure. I've already chosen how my stuff will be hosted, and that's not going to change right now. So honestly, I really don't care.

Maybe I need to explain why I don’t care.

Microsoft, there are only two groups of people who are going to choose your ridiculously wonderful and bloated cloud: established mid-market businesses with money to spend, and start-ups with a lot of throw-away capital who drank your kool aid. You shouldn’t worry about those people. The people you should worry about are those who will choose against it, and will have made their decision firmly.

First of all, I believe most enterprises will not want to put their data on a cloud, certainly not behind a standardized set of cloud interfaces. It's too great a security risk. Amazon's true OS cloud is enticing because companies can roll their own proprietary APIs and have them talk to each other while spinning up VM instances on a whim. They have sufficient tooling outside of cloud-speak to write what they need and to do what needs doing. But for the most part, companies want to keep internal data internal.

Second, we geeks don't fiddle a whole lot with accounting and taking corporate risks. We focus on writing code. That code has to be portable. It has to run locally, on a dedicated IIS server, or in a cloud. If code written for your cloud (whether deployed to the actual cloud or run locally for testing, it doesn't matter) doesn't run equally well in other environments, it's at best a redundant effort and at worst a wasted one. We have to write code for your cloud and then we have to write code for running without your cloud. We most certainly would not be comfortable writing code that only runs on your cloud, but the way your cloud APIs are marketed, we might as well bet the whole farm on it. And that just ain't right.

See, I don’t like going into anything not knowing I can pull out and look at alternatives at any time without having completely wasted my efforts. If I’m going to write code for Azure, I want to be assured that the code will have the same functionality outside of Azure. But since Azure APIs only run in the Azure cloud, and Azure cannot be self-hosted (outside of localhost debugging), I don’t have that assurance. Hence, I as a geek and as an entrepreneur have no interest in Azure.

When I choose a tool chain, I choose it for its toolset, not for its target environment. I already know that Windows Server with IIS is adequate for the scale of runtimes I work with. When I choose a hosting service, I choose it expecting to be very low-demand but with potential for high demand. I don’t want to pay out the nose for that potential. I often experiment with different solutions and discover a site’s market potential. But I don’t go in expecting to make a big buck—I only go in hoping to.

What would gain my interest in Azure? Pretty much the only thing that would have me give Azure even a second glance would be a low-demand (low traffic, low CPU, low storage, and low memory) absolutely free account, whereby I am simply billed if I go over my limit. If free’s no good, then a flat ridiculously low rate, like $10/mo for reasonable usage and a reasonable rate when I go over. A trial is unacceptable. I’m not going to develop for something that is going to only be a trial. And I also prefer a reasonable flat rate for low-demand usage over a generated-per-use one. I prefer to have an up-front idea of how much things will cost. I don’t have time to keep adjusting my budget. I don’t want to have to get billed first in order to see what my monthly cost will be.

I’m actually paying quite a bit of money for my Windows 2008 VPS, but the nice thing about it is there are no surprises, the server will handle the load, and if I ever exceed its capacity I can just get another account. Whereas, cloud == surprises. You have to do a lot of manual number crunching in order to determine what your bill is going to look like. “Got a lot of traffic this month? We got you covered, we automatically scaled for you. Now here’s your massive bill!”

Let’s put it this way, Microsoft. If you keep pushing Azure at me, I can abandon your tool chain completely and stick with my other $12/mo Linux VM that would meet my needs for a low-demand server on which I still have the support of some magnificent open source communities, and if my needs grow I can always instance another $12/mo account. Honestly, the more diluted the developer discussions are with Azure hype, the more inclined I am to go down that path. (Although, I’ll admit it’ll take a lot more to get me to go all the way with that.)

Just stop, please. I have no problem with Azure, you can put your banner ads and printed ads into everything I touch, I’m totally fine with that. What is really upsetting to me is when magazine and related content, both online and printed, is taken up to hype your proprietary cloud services, and I really feel like I’m getting robbed as an MSDN subscriber.

Just keep in mind, we’re not stupid. We do know service marketing versus helpful development tips when we see it. You’re only hurting yourselves when you push the platform on us like we’re lemmings. Speaking for myself, I’m starting to dread what should have been a wonderful year of 2010 with the evolution of the Microsoft tool chain.

 

[UPDATE: According to this, Azure will someday be independently hostable. That's better. Now I might start paying attention.] 


Tags:

Peeves | Software Development | Web Development

Microsoft WebsiteSpark Now Negates The Cost For Web Devs and Designers

by Jon Davis 26. September 2009 18:52

A couple weeks ago I posted a blog entry ("Is The Microsoft Stack Really More Expensive?") describing the financial barrier to entry for building software--particularly web apps--on the Microsoft platform. The conclusion was that the cost is likely to be nil if you're a) willing to settle for the Express products and everything else bundled in the Microsoft Web Platform Installer (which includes a slew of open source ASP.NET and PHP web apps to start you off), b) starting a software company, c) a student, or d) an employee of a company willing to foot the bill for an MSDN license (to you personally, not to your team).

Well, Microsoft just created yet another program, for those of you who are e) building or designing web sites. (Sweeeeet!!) For a $100 offing fee (a fee that you pay when your license ends, rather than when it begins) you get Windows Web Server 2008 R2, SQL Server 2008 Web Edition, Visual Studio 2008 Professional Edition, and Expression Studio 3.

Not bad! Although, your license ends in three (3) years (same as BizSpark).

Link: Microsoft WebsiteSpark

CSS For Layouts: Meh, I'll Stick With Tables, Thanks

by Jon Davis 24. September 2009 11:09

I tend to agree with the sentiments posted here:

http://www.flownet.com/ron/css-rant.html [followups here and here].

I've wasted WAY too many aggregate hours trying to nudge DIVs into place using half-baked CSS semantics that work on all browsers without workarounds, when the simple use of a <TABLE> tag would have sufficed just fine. Even this blog's sidebar does not behave the way I want it to, despite my spending some time trying (and failing) to get it to behave according to my preferences, because I chose CSS instead of <TABLE>, and, frankly, CSS sucks for some things such as this. With <TABLE> I can say simply <TABLE WIDTH="100%"><TR><TD>..</TD><TD WIDTH="300"> and boom, I have a perfect sidebar with a fixed-width column and a fluid content body, with ABSOLUTELY NO CSS TO MANGLE other than disabling the default HTML rendering behavior of borders and spacing.

Ron comments,

Another common thread is that "tables are for tabular data, not layout." Why? Just because they are called tables? Here's a news flash: HTML has no semantics beyond how it is rendered! (That's not quite true. Links have semantics beyond their renderings. And maybe label tags. But nothing else in HTML does.)

The only reservation I have in favor of DIVs instead of TABLEs is when it gets down to nesting. Deeply nested <TABLE>'s can get really, really ugly. I think this is where all the hatred of <TABLE>s comes from, which I agree with.

I've reached the conclusion that if I can use <DIV>s effectively (and quickly) and the behavior is predictable, I'll use DIVs. But the same goes for TABLEs. I have yet to hear a CSS purist describe a logical reason to use DIVs+CSS over TABLEs for overall page structure. It all seems to be cognitive dissonance and personal bias.


Tags:

Web Development


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.


