How do you manage focus in ramp-up?

by Jon Davis 21. October 2019 01:39

Just a brief question. How do you manage to maintain focus during ramp-up (technical self-training)? One of my great deficiencies over my career has occasionally turned out to be a lack of deep understanding of some of the tech stack components I claim to support. And the root cause is that when the time came that I needed to learn up on my stuff, I got distracted--ADHD of some sort--and never followed through with the deep dive to the end.

22 years in the field, I've seen a lot of action and a lot of evolution of technology and watched a lot of technology trends come and go. Every new thing I learn makes me reminisce about something else I knew, or something else I heard about, or wonder what happened to that one guy who showed me how he dabbled in that such and such ...

I'll read a paragraph from a tutorial, nitpick specific details, ask questions about those details, and find myself spending thirty minutes researching those tangents. Eventually I'll realize I got off on a tangent--but not before taking a mental break and finding some other music to listen to, then researching what happened to that old musical artist I used to love years ago, or whatever--and then finally return to the same paragraph to implement the detail the tutorial is telling me to do.

Rinse, repeat, for every. single. paragraph. As a result, twenty-minute tutorials take me two or three days--some ten or so hours--to complete. Similarly for video; I am constantly having to rewind because I think too hard about each point, so a two-hour video can take days (many hours) to wade through. I'm more or less enriched for all this, but it makes me a less competent professional for the same hours invested--I have to dedicate so much more time to the craft than my peers do, I think.

When I know my stuff I am pretty good at it, but I am floored by the depth and resolve and detail that my peers are able to demonstrate when they lay out their abilities and put them on full display. Yes, it makes me jealous. And frustrated.

I'm serious. Is there a particular kind of psychologist or other professional I should be talking to? Maybe there is some "focus pill" I can take.

Are there simple tactics or mind tricks you use to keep the ball rolling and on track?


Prelude to a Blogging Reset

by Jon Davis 19. October 2019 22:55

I started blogging around 2002/2003, when Blogger and Radio UserLand inspired me to create a desktop blogging app I called PowerBlog.

Archive.org: https://web.archive.org/web/20031010203956/http://powerblog.net/
PowerBlog (beta release)

My motivations were three:

  1. To grow my skill set as a software developer.
  2. To see if I could maybe make a living building and selling home-grown software. (PowerBlog was available for sale, but didn't sell.)
  3. To write, and write, and write. My brain was churning, and I wanted to divulge my thoughts out loud. 

PowerBlog was written in VB6. It was built around an instance of the Internet Explorer WebBrowser COM object to drive a true WYSIWYG editing experience, borrowed design philosophies from Outlook Express (an e-mail and newsgroup client), and had a publishing component model, including support for user-defined publishing scripts via the Active Scripting (VBScript/JScript) runtime. When .NET v1.1 came out, I rewrote PowerBlog, reusing the same components from Microsoft but adding more features like XML-RPC--for which I built my own client/server libraries with self-documenting HTML output similar to ASP.NET's .asmx behavior--and style synchronization, so that the user could edit in the same CSS style as the page itself. I tried to sell PowerBlog for $10 to $20 per download. It didn't sell. That's kind of okay; the value of my efforts was in the skills ramp-up and in having a tool for my own writing.

PowerBlog got Microsoft's attention. Three things happened, all of a sudden, with Microsoft: 1) the Windows Live initiative kicked off blogging from their web site (which they've since shuttered), 2) Microsoft interviewed me, and 3) Windows Live Writer was written, stealing most of the good ideas I had in PowerBlog (admittedly, tidbits of PowerBlog's ideas were themselves borrowed from another app called W.bloggar). I got nothing for it but some bragging rights for the ideas.

I was broke. My motivators for going out on my own (a taxes-related need to go offline and do some research) subsided. So I stopped development and went back out into the workforce. When .NET 2.0 came out along with a new version of Internet Explorer, it broke PowerBlog permanently: Microsoft published WebBrowser components that directly collided with the WebBrowser components I had created for PowerBlog. So PowerBlog isn't available today--it will just crash if you try--and I couldn't even build my own source code anymore without digging up an old Windows XP box with an old version of Visual Studio and .NET Framework 1.1. It just wasn't worth the hassle, so I never bothered. Maintenance was immediately halted. PowerBlog.net went offline. I didn't much care. I did need to keep blogging and writing, so eventually I found BlogEngine.net and transferred my technical blog content to it. I updated the blog engine code once or twice as it evolved, but eventually I just let it sit dormant.

So that's where my tech blog instance stands today. http://www.jondavis.net/techblog/ runs on a decade-old version of BlogEngine.net which I've tailored a bit to include banned IPs and some home-grown CAPTCHA functionality for the comments. Over the years I've looked at a few other blogging engines I considered adopting; most notably I remember looking at NBlog, which gave me the most control but required me to implement it in code and didn't sufficiently demonstrate black-box modularity, and Ghost, which would grow me in NodeJS but ticked me off with its stance of only supporting Markdown and never WYSIWYG HTML editing.

Meanwhile, the whole time, developers and non-developers alike have pooh-poohed the interestingness of blogs, blogging software, blogging strategies, blogging services, etc. I can certainly see why. At its most basic level, a blog is little more than a single-table CRUD app. You have an index of recent posts, you have a detail view of an individual blog post, you have an editor and creator page, and you can delete a blog post. The content of a blog post is, to the casual observer, nothing but four components: title, body, author, and publish date. If that's what you think a blog is, you're naive, and likely didn't bother reading this far.

At the very least, on the technical side, blogging brought about a few innovations that revolutionized the Internet, including: 

  • XML-RPC (SOAP's midget predecessor)
  • RSS
  • Publish pings
  • Atom
  • OPML
Not to mention less specific innovations, like end-user-accessible management of slugs (URL paths) for enhancing SEO. And the whole notion of blogging became the basis of the World Wide Web's notion of community interconnectivity. The broad set of blog sites was social networking before social networks took over social networking. Blog sites were known, and linked to each other. Followers of blogs created view counts that filled the hole that Facebook post likes fill today.
 
This is perhaps why dev.to is such a success story; it's like a Facebook group, for coders, where every single post is a blog article.
 
Anyway, so now I sit here typing into the ten-year-old deployment of BlogEngine.net on my resurrected tech blog, wondering what my next steps should be. Well, I'm pretty sure I want to create my own blog engine again, and build on it. Yes, it's a two-decades-stale idea, but here are my motives:
 
  • My blog philosophies still differ somewhat from the offerings currently available (although not far at all from BlogEngine.net)
  • I want to keep growing my technical skill set (not just write about them). There are revised web and software standards I need to freshen up on, and apply my learnings. Creating a blog site has always been a good mechanism to sample and apply the most basic web tech skills.
  • Anything I deploy as a technical blog needs to portray my personal portfolio, including the site on which my writing is displayed
  • I still like the idea of taking something I myself created and putting it out on github as a complete package and perhaps managing a hosted instance of it, similar to WordPress.
All this said, I don't think I'll be building a full-blown BlogEngine.net equivalent. It will be a simple thing, and from it I'll glean some ideas to apply to other software efforts I'll be working on.
 
But a few things I will point out about the technical strategies and approaches I intend to implement:
  1. Rich front-end libs (React, Vue, Blazor, etc)?? While I might take, say, Ghost's approach in having some rich interactivity for the author's experience, the end user (reader) experience cannot be forced to utilize them. A blog page must be rich in HTML semantics and index well SEO-wise for the web at large, so the HTTP response payload cannot be Javascript stub placeholders for dynamic content.
  2. Componentization is more important than anything else. I do intend to iterate over a lot of the features this old instance of BlogEngine.net has, as well as current blog engines, and figure out ways to spoon-feed those features without embedding them deeply in interdependencies. The worst thing I could do is embed all the bells and whistles directly into ASP.NET Razor Pages themselves. I intend to apply decorator pattern principles (see the sketch just after this list), perhaps applying sidecar architecture for some components where appropriate. Right now I'm looking at using Identity Server 4, for example, as a sidecar architecture for author identity and privileges. Speaking of which,
  3. Everything must be readily run-anywhere and black-box deployable: as ad hoc code launched with self-hosting, as an Azure web app service (PaaS), as a Docker container, or on a VM (IaaS). I also intend to set up a hosted instance and offer limited blog hosting for free, with extended features for a fee. In this sense, WordPress is a competing platform.
  4. "Extended features, like ..?"  Premium templates. Large storage hosting. And premium add-on features yet to be defined.
  5. The templates can be based on Razor, but "pages" cannot be tightly defined. If this platform grows up, it needs to be more than a blog; it needs to be a mini-CMS (content management system).
  6. The social networking ecosystem is a mandatory consideration; full blogging must integrate with microblogging and multi-contributor cross-pollination, including across disparate sites and systems; more on this later, but imagine trusted Twitter users liking Facebook posts.
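To illustrate the componentization point in item 2 above, here's a minimal, hypothetical sketch of decorator-pattern composition for rendering a post. The names (IPostRenderer, CommentsDecorator, and so on) are made up for illustration; this isn't code from any existing engine, just the shape of the idea:

    using System.Text;

    // Hypothetical names throughout; not from any existing engine.
    public class Post
    {
        public string Title { get; set; }
        public string BodyHtml { get; set; }
    }

    // The core abstraction that every feature wraps.
    public interface IPostRenderer
    {
        string Render(Post post);
    }

    // The bare-bones renderer: nothing but the post itself.
    public class CorePostRenderer : IPostRenderer
    {
        public string Render(Post post) =>
            $"<article><h1>{post.Title}</h1>{post.BodyHtml}</article>";
    }

    // A feature layered on as a decorator rather than baked into a Razor Page.
    public class CommentsDecorator : IPostRenderer
    {
        private readonly IPostRenderer _inner;
        public CommentsDecorator(IPostRenderer inner) => _inner = inner;

        public string Render(Post post)
        {
            var html = new StringBuilder(_inner.Render(post));
            html.Append("<section class=\"comments\"><!-- comments UI goes here --></section>");
            return html.ToString();
        }
    }

    // Composition root: features stack on, or stay out, without touching the core.
    //   IPostRenderer renderer = new CommentsDecorator(new CorePostRenderer());

The point is simply that a feature like comments (or ratings, or related posts) can be stacked onto the rendering pipeline or left out entirely without touching the core code--the same spirit applies whether the unit is a C# class, a Razor component, or a sidecar service.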
Again, yes, I am unpacking a two-decades-old solution proposal to a Web 1.0 / 2.0 problem, but that problem never really went away; it just became less prominent. I really don't anticipate this blowing up into much of anything. At a minimum, I need to convert all of my content here on this technical blog (at http://www.jondavis.net/techblog/) to a home-grown platform of my own making, even if after doing that I call it done and walk away.
 
Another motivator for getting that done is the fact that, just writing this, I'm writing with very tiny text on a very high resolution screen, and you're probably squinting reading this, too, if you're reading it from my web site, and if you actually made it this far. Again, yes, I could simply replace the template and maybe refresh the version of BlogEngine.net I'm using. But why do that, when I call myself a 23-years-plus experienced web development veteran and could just build my own? Am I really so useless in my aging as to be unable to build something of my own again? It's not that a blog engine will get me a better job, duh; it's that using someone else's engine just plain looks bad on me. So, this is a lightweight test of mettle. I need to do this because I need to.
 
On a final note, I'll mention again what I'm focusing on first: "who am I"--authentication and authorization. I'm learning up on Identity Server 4. I'm still new to it, so this will take some time. Right now it looks like the black box version of my blog engine will come with an instance of Identity Server 4, based perhaps on ASP.NET Core Identity as the user store; developers can tweak it out to their heart's content. I'm still mulling over whether to just embed it into the blog app (underkill? too much in one?) or separate it out as a separate SSO-oriented web app (overkill? too many distributed parts for a mere blog?), but at this point I'll likely do the former for the black box download (source code included) and the latter for the hosted instance which I hope to set up and offer to people wordpress.com-style.
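For what it's worth, here's roughly the shape the embedded option would take--a hedged sketch only, using IdentityServer4's documented extension methods with ASP.NET Core Identity as the user store. The ApplicationUser, ApplicationDbContext, and Config types are placeholders I've made up for illustration, not working configuration:

    using System.Collections.Generic;
    using IdentityServer4.Models;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Identity;
    using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.DependencyInjection;

    // Placeholder user and EF context types for ASP.NET Core Identity.
    public class ApplicationUser : IdentityUser { }
    public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
    {
        public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { }
    }

    // Placeholder IdentityServer configuration; a real setup would define clients,
    // scopes, and signing credentials deliberately.
    public static class Config
    {
        public static IEnumerable<IdentityResource> IdentityResources =>
            new IdentityResource[] { new IdentityResources.OpenId(), new IdentityResources.Profile() };
        public static IEnumerable<Client> Clients => new Client[0];
    }

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            // ASP.NET Core Identity as the user store.
            services.AddDbContext<ApplicationDbContext>(o => o.UseSqlServer("<connection string>"));
            services.AddIdentity<ApplicationUser, IdentityRole>()
                    .AddEntityFrameworkStores<ApplicationDbContext>();

            // IdentityServer4 embedded in the same app (the "former" option above).
            services.AddIdentityServer()
                    .AddDeveloperSigningCredential()                        // dev-only signing key
                    .AddInMemoryIdentityResources(Config.IdentityResources)
                    .AddInMemoryClients(Config.Clients)
                    .AddAspNetIdentity<ApplicationUser>();                  // bridge to ASP.NET Core Identity

            services.AddMvc();
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseIdentityServer(); // adds the OpenID Connect / token endpoints
            app.UseMvc();
        }
    }

The separate-SSO-app option would be the same wiring, just hosted in its own web app that the blog app trusts as an external identity provider.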

 

Tags:

Blog | General Technology | Web Development

Time to blow off some dust, toot my own horn, and hit the books

by Jon Davis 14. October 2019 23:59

Whoa, Nelly!

Jon Davis's blog is back up! Last post was .. more than three years ago! Daaaang! How y'all been?! Missed y'all! Miss me?

I'm just posting a quick update here to let y'all know that the gears in my tech world have started turning again, and to update y'all on what's going on.

You may or may not have noticed that my root web site http://jondavis.net/ went through a couple of phases and got stuck with that dorky "please wait" console-ish page. I was supposed to have replaced it by *cough* .. 2016. It's now approaching the year 2020. What happened? Well, what happened was I got a few kicks in the pants. I got myself into some really humbling situations, not the fun kind. Perhaps you might say I got phased out. But I never left, so now I'm in high gear phasing back in. I might try to explain ...

A few blog entries ago (this goes back to 2014) I wrote about how I needed to reset things, and how I needed to mull over what was going on at work. So here's a recap on my career path lately up till my previous job:

 

  1. In 2012 I was hired at Neudesic, a prominent consultancy firm. I dreamed of surrounding myself with smart people I could glean from. It turned out that things were pretty erratic at Neudesic. On my first project assignment, the client was Microsoft themselves. Prior to my joining, they had just canned a project to build a big, beautiful Windows 8-themed web site where developers and other staff would post all-important articles and index them with Lucene.NET. Architecturally it was a disaster, and they canned the Microsoft executive at the same time they canned the project. So when I joined the team, they were trying to salvage that project. I worked with them to try to dig into the code, get it up and running, figure out the performance issues, etc., and at some point I flew out to Redmond, Washington to meet in person with the local Neudesic managers who were interfacing with Microsoft. Then I opened my mouth. I said, "Y'all are really just makin' a blog. Why not add blog-ish things? Why reimplement SharePoint?" Suddenly we were all fired. So then they shipped me off to Pulte Homes, where I did some work, but the architect for that project and I didn't seem to know what we were doing on the user authorization detail, and I proved blunt enough that they "couldn't afford" to keep me on after a couple months. So then they shipped me off to California to work with Ward & Brown on the Obamacare / ACA implementation there, again late to the party and annoyingly insistent on "helping". But when QA and I asked around about where the production/staging servers were, and they realized they had apparently missed that part, they canned all their contractors. All 250 of us. So there I was, stuck in the Neudesic Phoenix office waiting for them to call me in. And they did. They laid me off. Like the child-man that I was, I accidentally let them catch me choking up. They did say they might be able to take me back in a few months, though, so ...
  2. For the next few months I did some contract work for a local entrepreneur. I waited that out for a while, and ultimately I wanted to go back to Neudesic, so ...
  3. In mid-year 2013 I asked Neudesic to take me back in. They did. They brought me back with open arms. It was more awkward for the lot of them than I or anyone anticipated. I'd made myself a reputation for being unrefined up to that point, I'm pretty sure. But anyway, when I arrived (a second time) I noticed that more people, including my former manager, were no longer there. They were laid off too. And I shipped back to Pulte Homes, on another, bigger project. I was up front with them, though, about what specific tools I was unacquainted with, specifically Bootstrap and the like, at the time. They shrugged that off, brought me in anyway. That project was led by--ima be frank here--a really, REALLY bitchy, control freakish, PMS-y woman. I tried to overlook it at the time. I tried to pretend it wasn't so at the time. I'm thinking back half a decade now, and I'm saying it like how I witnessed it. She was horrible. If I ever get in a situation like that again I'll run for the hills. That was awful. She was awful. But anyway, at the time, overlooking that (which I deliberately did), since the lead personality who'd previously been on that project but was laid off was now gone, and I wasn't getting much in the way of introduction, I deliberately, if bombastically, made a ton of assertions and at the same time asked a lot of really stupid, ignorant questions, which ticked off everyone on the team. So they canned me from the team. So that was fun. But what really did me in was I overheard the Neudesic chief developer director guy tell my boss that I "shouldn't be writing software". They assigned me some out-of-state ops role. I should have quit then and there; sure, I screwed up, but I'm a developer, not an ops guy. I didn't last long, and I ultimately did resign.
  4. So now in 2014 I got a very odd and delightfully educational gig with DeMark Analytics. First of all, Tom DeMark, the owner, is a wealthy financial technician (a term I never knew before) who did what I imagined doing some years back--he studied financial charts and came up with artificially intelligent algorithms that predicted changes in the market. He's one among many people who come up with this stuff, but even so, that's what he did. So now they were building a product for people who could use these financial chart studies--a charting app, basically, with studies being overlaid on the chart. This entailed financial event data being siphoned in to both the back-end engine and the client (the web browser). This is high tech stuff, and the pattern in use was CQRS-ES, which I'd never heard of, much less mastered. So here's where it all broke down: 1) We all sat in close proximity to each other. No privacy. All conversations fully exposed. 2) I was hired as a lead, but I was wrong a couple times when I was adamant, the teammates observed it, and so my authoritative experience as a lead developer was never trusted again. 3) Everyone in the company but maybe four of us was either family to Tom DeMark or a close friend of Tom DeMark. It was a nepotistic environment. I worked under one of Tom's sons (he was CEO/President), and another of Tom's sons worked under me--well, alongside me, since me being lead didn't work out. He was a bit spoiled. He was quick with the Javascript, though; I couldn't keep up with his code, which was charting graphics. Like, at all. So I stopped trying. 4) My pay was adequate (the best I can say about it), but my paycheck was cut monthly. Weird, bizarre, and painful. But it was worth it; I was investing in this, I figured. 5) My direct boss was a super-high-IQ ass. I asked him technical questions about the middle/back end, like how Cassandra was to be used, and instead of answering, he'd call everyone over--the CSS front-end developers, everyone--then ask me to ask my question again. All to, I guess, humble me? .. Show me that this was "duh" knowledge that everyone would know and understand? So where this went critical was 6) I again used rhetoric. Crap, man. This pretty much got me fired. I used rhetoric! What I mean is, I insinuated, in so many words, that "your explanation sounds vague, and if I didn't know any better I'd say it almost sounds like you said [something obviously stupid]". Except I didn't say it like that; I actually, literally said, ".. you mean [something obviously stupid]?" I really, truly, genuinely trusted that he understood that I was being rhetorical, but this was the second time I made that mistake--I made the same mistake at Neudesic with "I can't read that [Javascript code]"; I could, but I was getting at the fact that I shouldn't have to stop and read it--and with these guys it was sabotage. A few minutes after we returned to our desks, he threw his hands up and announced he was switching to Java/Scala. "Sorry Jon." Sorry, Jon? So yeah, he was basically saying "you're fired, please quit." It took a couple more months, but I eventually did.
  5. So then I went to work at another consultancy firm: Solution Stream. Utah-based. They seemed to be trying to spread out into Phoenix. "I can do this," I thought. I had learned at Best Software (Sage Software), when I moved to Arizona, how consultants--consultants, not contractors--work, and how they bring prestige to the process of coming up with technical solutions and strategies, documenting them, and working them with the clients. But Solution Stream was really just interested in creating contractors, apparently. Regardless, they didn't appreciate what I brought to the table, they undersold my capabilities, the executives decided they didn't like me, and they literally pulled me off a project that the client and my direct boss said I was doing great with and signed me over to Banner Health as a temp-to-hire (fire). I had no interest in being hired as a permanent Banner Health employee. When my temp-to-hire contract ended (the "temp-" part), everything ended. I swore off all consulting firms at that point. No more consulting firms. Never again.
  6. So then I got picked up at InEight. Bought out by major construction company Kiewit, InEight was building a cloud version of their Hard Dollar desktop app, which manages large-scale construction logistics (vendors, materials, supplies, etc). They had some workable plans and ideas. But things broke down real fast. Everyone who interviewed me quit within the year I joined, and it wasn't hard to see why. 1) They had non-technical people at the helm (senior leadership) making some very expensive and frustrating platform and architecture decisions. High-performance software on minimal-performance hosting; nothing worked, because they didn't want to spend the money to scale the web tier. 2) The work was outsourced. Most of it was outsourced to India. Eventually they shipped some of it onshore to another midwestern state. But even onshore, most of their staff were H1-B visa holders. Foreigners. Nearly everyone I was working with was from India. Even after so many people quit, I stuck around as long as I could. But eventually I couldn't stand things anymore; my career path had become stagnant, and I knew I couldn't work with the senior executives (no one could, except people from India, I guess). I was about to give two weeks' notice when those senior execs pulled me into a room and chewed me out for being "disrespectful". I was done. Never saw them or anyone over there again. (Actually, that's not entirely true; I have maintained a strong friendship with at least one colleague from there. He got laid off a few months after I left. We're friends; we literally just met up this week.)
  7. For the last year and a half I've been working ... somewhere. For now. I came in as a lead developer, but they, too, openly declared me "disrespectful", so I've given up and just been a highly productive, proficient, heads-down programmer. This place, too, is mostly H1-B visa holders from India. I'm surrounded by foreigners. Scarcely a black, white, or Mexican face. It's depressing. I have less and less each year against Indians, but my God, let me work with people of my own culture if I'm here in the USA, just a few like-minded, like-raised friends, just a few? And now my team is getting dismantled, due to a third party taking over what we're doing. So I'm about to get laid off. If I'm not laid off, though, well, ... 1) as a contractor, I don't get paid holidays and I don't get paid vacations, and that was painful enough, but unexpectedly, after my hire, it turned out there is a mandatory two weeks of unpaid leave during Christmas and New Year's, and that's unacceptable (thousands of dollars lost, not to mention depressing since I spend holidays alone, so yes, it's unacceptable). 2) I have been super comfortable, and super complacent, with little to gain in technical growth. It's been ASP.NET MVC with SQL Server and jQuery. And some .NET Core 2.2 and Razor Pages. Woo wee. *sigh* So yeah, I'm open to change, regardless of whether I get laid off.
There's my life story for the last seven years. Stupid, depressing, awful, I've been awful, I've let myself screw myself over time and time again. So here is my new strategy.

 
Even where I've whined and complained, I have no one to blame for my life's frustrations but myself. It's part of the maturing process. I've embraced my learnings and I will carry on. I will try to let go of the past; I only repeat and document these stories here because I have learned from them, and perhaps you can, too. What do I want in my career path? I miss the days when I was an innovator.

You guys remember AJAX? Yeah? I dreamed up AJAX in 1998 when IE4 came out. I called it "TelnetGUI". Stupid name. Other people came up with the same idea a few years later and earned the credit.

You guys remember Windows Live Writer? Yeah? ... Total rip-off of my PowerBlog app, down to the detail. You could say I prototyped Windows Live Writer before Microsoft started working on Windows Live Writer. Microsoft even interviewed me after I built PowerBlog, because of PowerBlog and its Microsoft-minded inspirations of component integration. The jerk interviewer was like, "Wait, you mean you don't know C++?! Oh good grief, I thought you were a real programmer." Screw you, Microsoft interviewer. LOL. Anyway, Windows Live Writer came out a couple years later. It took all of PowerBlog's fundamental ideas, even down to gleaning the CSS theme and injecting the theme into the editor.
 
You guys remember PowerShell? Yeah? I prototyped the idea in or around 2004. I took the ActiveScript COM object, put it in a C++ console container, spoonfed some commands where you could new-up some objects and work with them in a command-line shell, suggested that the sky's the limit if you integrate full-blown .NET CLR and shell commands in this, and showed it to the world on Microsoft's newsgroups. Microsoft was watching; I planted a seed. A year or so later, PowerShell ("Monad") was previewed to the world. I didn't do the dirty work of development of it, but I seeded an idea.
 
You guys remember jQuery UI? Yeah? I cobbled together a windowing plugin for jQuery a year or two before jQuery UI was released. It was called jqDialogForms. Pretty nifty, I thought, but heck, I never got to use it in production.  
 
In fact, there's a lot of crap in my attic that I recently dug out and put up over at https://github.com/stimpy77/ancient-legacy (It really is crap. Nothing much to see.)
 
And, oh yeah, you guys remember Entity Framework, Magical Unicorn Edition? I, too, had been inspired by Fluent NHibernate, and I, too, was working on an ORM library, which I called Gemli [src]. Sadly, I ended up with a recursion nightmare I myself stopped being able to read, development slowed to a halt, and then suddenly Microsoft announced EF Magical Unicorn Edition, and I observed that it did everything I was trying to do in Gemli plus 99x more. So that was a waste of time. Even so, that was mini-ORM-of-my-own-making #2 or #3.
 
All of these micro-innovations and others are years old, created during times of passion and egotistical self-perception of brilliance. What happened?! I think we can all see what happened. My ego kept bulldozing my career. My social ineptitude vanquished my opportunities. And I got really, really lazy on the tech side.
 
My blog grew stagnant because, frankly, career errors aside, the bold and lengthy philosophical assertions in my blog articles were pretty wrong. Philosophies like, "design top down, implement bottom up". Says who? Why? I dunno. Seemed like an interesting case to make at the time. But then people at meetups said they knew my name, read my blog, quoted my articles, and I curled up and squealed and said "oh gawd, I had no idea what I was writing". (Actually I just nodded my head with a smile and blushed.)
 
For the last few weeks I have spent more than 70 hours a week, including study time, working. Working on hard skills growth. Working on side project development--brainstorming, planning. Working on fixing patchy things, like getting this blog back up, so I can get into writing again. It's overdue for a replacement, but frankly I might just switch over to http://dev.to/ like all the young cool kids.
 
Tech Things I Am Paying Attention To 
 
.NET Core 3 is where it's at, and .NET 5 is going to be The Great .NET Redux's grand arrival. However, the JVM has had a huge comeback over the last half-decade, and NodeJS and npm, like squirrelly cats, have been sticking their noses in everything. Big client-side Javascript libraries from a year or two ago (Facebook's React, Google's Angular, China's Vue) are now server-side for some dumb reason. Most importantly, software is becoming event-driven. IaaS is gone. PaaS is passé. Kubernetes is now standard, apparently. Microsoft's MSMQ is so 1990s, RabbitMQ is so '00s; LinkedIn's Kafka is apparently where it's at, and now Yahoo!'s Pulsar is gaining notoriety for being even more performant.
 
My day job being standard transactional web dev with ASP.NET/jQuery/SQL has made me bewilderingly antsy. If I want to continue to be competitive in complex software architecture and software development, I've got to go knee deep--no, neck deep--in React/Angular/Vue on the front-end; MongoDB, Hadoop, etc. on data; Docker/Kubernetes on the platform; Kafka on the data transfer; CQRS+ES on the transaction cycles; DDD as the foundation to argue for it all; and books to explain it all. I need to go to college, and if I don't have time or money for that I need to be studying and reading and challenging myself at all hours I am free until I am confident as a resource for any of these roles.
 
Enough of the crap reputation of being a wannabe. Let's be. 

 

Tags:

C# | Career | General Technology | Health and Wellness | Open Source | Opinion | Pet Projects | Social Networking | Software Development | Unresolvable

Technology Status Update 2016

by Jon Davis 10. July 2016 09:09

Hello, peephole. (people.) Just a little update. I've been keeping this blog online for some time, my most recent blog entries always so negative, I keep having to see that negativity every time I check to make sure the blog's up, lol. I'm tired of it so I thought I'd post something positive.

My current job is one I hope to keep for years and years to come, and if that doesn't work out I'll be looking for one just like it and try to keep it for years and years to come. I'm so done with contracting and consulting (except the occasional mentoring session on code mentor -dot- io). I'm still developing, of course, and as technology is changing, here's what's up as I see it. 

  1. Azure is relevant. 
    The world really has shifted to the cloud, and the majority of companies, finally, are offloading their hosting to the cloud. AWS, Azure, take your pick; everyone who hates Microsoft will obviously choose AWS, but Azure is the obvious choice for Microsoft stack folks--there is nothing meaningful AWS has that Azure doesn't at this point. The amount of stuff on Azure is terrifying enough in quantity and supposed quality to give me a thrill. So I'm done with hating on Azure; after all their marketing and nagging and pushing, Microsoft has crossed a threshold of market saturation such that I am adequately impressed. I guess that means I have to be an Azure fan, too, now. Fine. Yay Azure, woo. -.-
  2. ASP.NET is officially rebooted. 
    So I hear this thing called ASP.NET Core 1.0, formerly known as ASP.NET 5, formerly known as ASP.NET vNext, has RTM'd, and I hear it's like super duper important. It snuck by me, and I haven't mastered it, but I know it enough to know a few things:
    • It's a total redux by means of redo. It's like the Star Trek reboot except it’s smaller and there are fewer planets it can manage, but it’s exactly like the Star Trek reboot in that it will probably implode yours.
    • If you've built your career on ASP.NET and you want to continue living on ASP.NET's laurels, now is not the time to master ASP.NET Core 1.0. Give it another year or two to mature.
    • If you're stuck on or otherwise fascinated by non-Microsoft operating systems, namely Mac and Linux, but you want to use the Microsoft programming stack, you absolutely must learn and master ASP.NET Core 1.0 and EF7.
    • If all you liked from ASP.NET Core 1.0 was the dynamic configs and build-time transpiles, you don't need ASP.NET Core for that LOL LOL ROFLMAO LOL LOL LOL *cough*
  3. The Aurelia Javascript framework is nearly ready.
    Overall, Javascript framework trends have stopped. Companies are building upon AngularJS 1.x. Everyone who's behind is talking about React as if it were new and suddenly newly relevant (it isn't new anymore). Everyone still implementing Knockout is out of the loop and will die off soon enough. jQuery is still ubiquitous and yet ignored as a thing, but meanwhile it just turned v3.

    I don’t know what to think about things anymore. Angular 2.0 requires TypeScript, people hate TypeScript because they hate transpilers. People are still comparing TypeScript with CoffeeScript. People are dumb. If it wasn’t for people I might like Angular 2.0, and for that matter I’d be all over AureliaJS, which is much nicer but just doesn’t have Google as the titanic marketing arm. In the end, let’s just get stuff done, guys. Build stuff. Don’t worry about frameworks. Learn them all as you need them.
  4. Node.js is fading and yet slowly growing in relevance.
    Do you remember .. oh heck, unless you're graying, probably not, anyway .. do you remember back in the day when the dynamic Internet was first loosed on the public, and C/C++ and Perl were used to execute from cgi-bin, and if you wanted to add dynamic stuff to a web site you had to learn Perl and maybe find Perl pearls and plop them into your own cgi-bin? Yeah, no, I never really learned Perl either, but I did notice the trend. In the end, what did C/C++ and Perl mean to us up until the last decade? Answer: ubiquitous availability--not web server functionality, just an ever-present availability for scripts, utilities, hacks, and whatever. That is where node.js is headed. Node.js for anything web-related has become, and will continue to be, a gigantic mess of disorganized, everyone-is-equal, noisily integrated modules that sort of work but will never be as stable in built compositions as more carefully organized platforms. Frankly, I see node.js being more relevant as a workstation runtime than as a server runtime. Right now I'm looking at maybe poking at it in a TFS build environment, but not so much for hosting things.
    I will always have a bitter taste in my mouth with node.js after trying to get socket.io integrated with Express and watching the whole thing just crumble, with no documentation or community help to resolve it, and this happened not just once on the job (never resolved before I walked away) but also during a code-mentor mentoring session (which we didn't figure out), even after a good year or so of maturity of the platform after the first instance. I still like node.js but will no longer be trying to build a career on it.
  5. Pay close attention and learn up on Swagger aka OpenAPI. 
    Remember when -- oh wait, no, unless you're graying, .. nevermind .. anyway -- once upon a time something called SOAP came out, and it brought with it a self-documentation feature that was a combination of WSDL and some really handy generated HTML scaffolding built into web services that would let you manually test SOAP-based services by filling out a self-generated form. Well, now that JSON-based REST is the entirety of the playing field, we need the same self-documentation. That's where Swagger came in a couple years ago, and everyone uses it now. Swagger needs some serious overhauling--someone needs to come up with a Swagger-compliant UI built on more modular and configurable components, for example--but as a drop-in self-documentation feature for REST services it fits the bill.
    • Swagger can be had on .NET using a lib called Swashbuckle. If you use OData, there is a lib called Swashbuckle.OData. We use it very, very heavily where I work. (I was the one who found it and brought it in.) "Make sure it shows up and works in Swagger" is a requirement for all REST/OData APIs we build now. (There's a minimal setup sketch just below this set of bullets.)
    • Swagger is now OpenAPI, but it's still Swagger; there are not yet any OpenAPI artifacts that I know of other than Swagger. Which is lame. Swagger is ugly. Featureful, but ugly, and non-modular.
    • Microsoft is listed as a contributing member of the OpenAPI committee, but I don't know what that means, and I don't see any generic output from OpenAPI yet. I'm worried that Microsoft will build a black box (rather than white box) Swagger-compliant alternative for ASP.NET Core.
    • Other curious ones to pay attention to, but which I don't see as significantly supported by the .NET community yet (maybe I haven't looked hard enough), are:
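    For the curious, here's roughly what that Swashbuckle bootstrap looks like on a classic ASP.NET Web API project--a hedged sketch assuming the Swashbuckle 5.x-era API; the ODataSwaggerProvider line follows the Swashbuckle.OData convention, and "Acme API" is a placeholder title:

        using System.Web.Http;
        using Swashbuckle.Application;
        using Swashbuckle.OData; // only needed if you're layering OData support on top

        public static class SwaggerConfig
        {
            public static void Register()
            {
                var config = GlobalConfiguration.Configuration;

                config
                    .EnableSwagger(c =>
                    {
                        c.SingleApiVersion("v1", "Acme API"); // placeholder version/title
                        // Route OData controllers through Swashbuckle.OData's provider so
                        // they show up alongside the plain Web API controllers.
                        c.CustomProvider(defaultProvider =>
                            new ODataSwaggerProvider(defaultProvider, c, config));
                    })
                    .EnableSwaggerUi(); // serves the interactive /swagger UI
            }
        }

    Register() runs once at application start (the NuGet package normally wires that up for you), and from then on "does it show up in Swagger?" becomes a five-second check rather than a documentation chore.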
  6. OData v4 has potential but is implementation-heavy and sorely needs a v5. 
    A lot of investments have been made in OData v4 as a web-based facade to Entity Framework data resources. It's the foundation of everything the team I'm with is working on, and I've learned to hate it. LOL. But I see its potential. I hope investments continue because it is sorely missing fundamental features like
    • MS OData needs better navigation property filtering and security checking, whether by optionally redirecting navigation properties to EDM-mapped controller routes (yes, taking a performance hit) or some other means
    • MS OData '/$count' breaks when [ODataRoute] is declared, boo.
    • OData spec sorely needs "DISTINCT" feature
    • $select needs to be smarter about returning anonymous models and not just eliminating fields; if all you want is one field in a nested navigation property of a nested navigation property (the equivalent of LINQ's .Select(x => new { ID = x.ID, DesiredField2 = x.Child.Child2.DesiredField2 })), then in the OData result set you will have to dive into an array and then into another array to find the one desired field (see the example just below this set of bullets)
    • MS OData output serialization is very slow and CPU-heavy
    • Custom actions and functions, and making them exposed to Swagger via Swashbuckle.OData, make me want to pull my hair out. It sometimes takes two hours of screaming and choking people to set up a route in OData where it would take me two minutes in Web API, and in the end I end up with a weird namespaced function name in the route like /OData/Widgets/Acme.GetCompositeThingmajig(4); there's no getting away from even the default namespace, and your EDM definition must be an EXACT match to what is clearly, obviously spelled out in the C# controller implementation, or you die. I mean, if Swashbuckle / Swashbuckle.OData can mostly figure most of it out without making us dress up in a weird Halloween costume, surely Microsoft's EDM generator should have been able to.
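    To make the $select/$expand complaint above concrete, here's a hedged illustration using a hypothetical Widgets model (the entity and field names are made up for this example):

        using System.Linq;

        // Hypothetical entities, for illustration only.
        public class Child2Entity { public string DesiredField2 { get; set; } }
        public class ChildEntity  { public Child2Entity Child2 { get; set; } }
        public class Widget       { public int ID { get; set; } public ChildEntity Child { get; set; } }

        public static class SelectExample
        {
            // What I actually want is a flat projection, which LINQ hands me directly:
            public static IQueryable<object> Flatten(IQueryable<Widget> widgets) =>
                widgets.Select(x => new { x.ID, DesiredField2 = x.Child.Child2.DesiredField2 });

            // The closest OData v4 query uses a nested $expand with a nested $select:
            //
            //   GET /OData/Widgets?$select=ID&$expand=Child($expand=Child2($select=DesiredField2))
            //
            // ...but the payload keeps the navigation nesting, so the client still has to dig
            // through Child and then Child2 (arrays, when those navigations are collections)
            // to reach the one field it asked for, rather than receiving a flattened row.
        }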
  7. "Simple CRUD apps" vs "messaging-oriented DDD apps"
    has become the new Microsoft vs Linux or C# vs Java or SQL vs NoSQL. 

    The war is really ugly. Over the last two or three years people have really been talking about how microservices and reaction-oriented software have turned the software industry upside down. Those who hop on the bandwagon neglect to learn when to choose simpler tool chains for simple jobs; meanwhile, those who refuse to jump on the bandwagon use some really harsh, cruel words to describe the trend ("idiots", "morons", etc). We need to learn to love and embrace all of these forms of software, allow them to grow us up, and know when to choose which pattern for which job.
    • Simple CRUD apps can still accomplish most business needs, making them preferable most of the time
      • .. but they don't scale well
      • .. and they require relatively little development knowledge to build and grow
    • Non-transactional message-oriented solutions and related patterns like CQRS-ES scale out well but scale developers' and testers' comprehension very poorly; they have an exponential scale of complexity footprint, but for the thrill seekers they can be, frankly, hella fun and interesting so long as they are not built upon ancient ESB systems like SAP and so long as people can integrate in software planning war rooms.
    • Disparate data sourcing as with DDD with partial data replication is a DBA's nightmare. DBAs will always hate it, their opinions will always be biased, and they will always be right in their minds that it is wrong and foolish to go that route. They will sometimes be completely correct.

  8. Integrated functional unit tests are more valuable than TDD-style purist unit tests. That's my new conclusion about developer testing in 2016. The purist TDD mindset still has a role in the software developer's life. But there is still value in automated integration tests, and when things like Entity Framework are heavily in play, apparently it's better to build upon LocalDB automation than Moq.
    At least, that’s what my current employer has forced me to believe. Sadly, the purist TDD mindset that I tried to adopt and bring to the table was not even slightly appreciated. I don’t know if I’m going to burn in hell for being persuaded out of a purist unit testing mindset or not. We shall see, we shall see.
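    As a rough illustration of what that LocalDB-over-Moq style looks like in practice--a hedged sketch assuming EF6 and NUnit, with a hypothetical ShopContext/Order model rather than anything from a real codebase:

        using System.Data.Entity;   // EF6
        using NUnit.Framework;

        public class Order { public int Id { get; set; } public string Customer { get; set; } }

        public class ShopContext : DbContext
        {
            // Point EF at a throwaway LocalDB database instead of mocking DbSet<T>.
            public ShopContext() : base(
                @"Server=(localdb)\MSSQLLocalDB;Database=ShopTests;Integrated Security=true") { }
            public DbSet<Order> Orders { get; set; }
        }

        [TestFixture]
        public class OrderPersistenceTests
        {
            [SetUp]
            public void ResetDatabase()
            {
                // Recreate the schema for every run so tests start from a known state.
                Database.SetInitializer(new DropCreateDatabaseAlways<ShopContext>());
                using (var db = new ShopContext())
                    db.Database.Initialize(force: true);
            }

            [Test]
            public void Saving_an_order_assigns_an_identity()
            {
                using (var db = new ShopContext())
                {
                    var order = new Order { Customer = "Acme" };
                    db.Orders.Add(order);
                    db.SaveChanges();

                    // Exercised against real SQL (LocalDB), not a mocked DbSet.
                    Assert.That(order.Id, Is.GreaterThan(0));
                }
            }
        }

    The trade-off, of course, is speed and a real SQL Server dependency on the build agent, which is exactly the kind of thing the purist unit-testing crowd objects to.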
  9. I'm hearing some weird and creepy rumors I don't think I like about SQL Server moving to Linux and eventually getting itself renamed. I don't like it, I think it's unnecessary. Microsoft should just create another product. Let SQL Server be SQL Server for Windows forever. Careers are built on such things. Bad Microsoft! Windows 8, .NET Framework version name fiascos, Lync vs Skype for Business, when will you ever learn to stop breaking marketing details to fix what is already successful??!
  10. Speaking of SQL Server, SQL Server 2016 is RTM'd, and full blown SSMS 2016 is free.
  11. On-premises TFS 2015 only acquired gated check-in build support in a recent update. Seriously, like, what the heck, Microsoft? It's also super buggy; you get a nasty error message in Visual Studio while monitoring its progress. This is laughable.
    • Clear message from Microsoft: "If you want a premium TFS experience, Azure / Visual Studio Online is where you have to go." Microsoft is no longer a shrink-wrapped product company; they sell shrink-wrapped software only for the legacy folks, as an afterthought. They are a hosted platform company now, all the way.
      • This means that Windows 10 machines including Nokia devices are moving to be subscription boxes with dumb client deployments. Boo.
  12. Another rumor I've heard is that
    Microsoft is going to abandon the game industry.

    The Xbox platform was awesome because Microsoft was all in. But they're not all in anymore, and it shows, and so now as they look at their lackluster profits, what did they expect?
    • Microsoft: Either stay all-in with Xbox and also Windows 10 (dudes, have you seen Steam's Big Picture mode? no excuse!) or say goodbye to the consumer market forever. Seriously. Because we who thrive on the Microsoft platform are also gamers. I would recommend knocking yourselves over to partner with Valve to co-own the whole entertainment world like the oligarchies that both of you are since Valve did so well at keeping the Windows PC relevant to the gaming markets.

For the most part I've probably lived under a rock, I'm sure. I've been too busy enjoying my new 2016 Subaru WRX (a 4-door racecar), which I am probably going to sell in the next year because I didn't get out of debt first--but not before getting a Kawasaki Vulcan S ABS Café as my first motorized two-wheeler, and riding that between playing Steam games, going camping, and exploring other ways to appreciate being alive on this planet. Maybe someday I'll learn to help the homeless and unfed, as I should. BTW, in the end I happen to know that "love God and love people" are the only two things that matter in life. The rest is fluff. But I'm so selfish; man, do I enjoy fluff. I feel like such a jerk. Those who know me know that I am one. God help me.

[Photos: my 2016 Kawasaki Vulcan S ABS Café, among other things. Top row: fluff that doesn't matter and distracts me from matters of substance. Bottom row: matters of substance.]

Tags:

ASP.NET | Blog | Career | Computers and Internet | Cool Tools | LINQ | Microsoft Windows | Open Source | Software Development | Web Development | Windows

Many Users Still Not Drinking The NuGet Kool-Aid

by Jon Davis 22. July 2015 02:40

Yesterday (I say "yesterday" as if it was a day ago, but it's 1:50 AM and I need some sleep) three of my client's employees and I spent an hour--plus they had spent another hour on their own just prior--trying to fuss and fiddle to get a TFS 2012 build of a solution working. The problem: we were choosing not to check the NuGet packages' binaries into TFS, but the NuGet packages were not properly getting restored.

The team I'm currently supporting manages literally tens (more than 100, actually) of different deployed I.T. solutions, some of them old, some of them new, most of them rather small, and all of them needing to be maintained on a regular basis simply because "technology". Only a very small handful of these solutions actually use NuGet, thank heavens. The rest of the apps just go the old-fashioned route of having a "lib" directory (actually a "References" folder or "ProjectReferences" folder, but whatever; other companies I've been in call it "lib"). I've been with or consulted for at least 15 to 20 different companies over my career, and among those who knew about NuGet (about four or five or so), the majority of my experiences have seen the teams treat NuGet like a virus. And with good reason, too. Multiple projects means multiple different iterations of fetching "whatever flavor of Package X is out there". Multiple employees means multiple different package choices. Throw it all together and you end up with a mess of four or five different versions of client-side and server-side NuGet packages, all from different times, by different people, and at the end of it all it's so often a hackfest to get Visual Studio and whatever CI build server to cooperate with the whole package restore process at all.

And package restore has another fundamental flaw by design: being what it is, it actually expects the build to download dependencies in order to build. The TFS build server doesn’t have Internet access. The team will never give the TFS build server Internet access, not to fetch NuGet packages, not for any other reason. The team controls dependencies, as many enterprises should. So NuGet packages are copied out to a corporate Windows file share on the WAN. On the local developer workstations, Visual Studio is set up to only apply NuGet packages from this share. But TFS isn’t Visual Studio.

But that wasn't even the only issue. In our case yesterday, whoever worked on this solution forgot to right-click on the solution in Solution Explorer and click on the magical menu option, "Enable package restore for this solution". Oh, we had been through this before. Hours and hours of repeat fiddling on projects past had led to team documentation about the proper process .. and, ironically, the magical menu option wasn't even documented. Rather: "find the .nuget folder from another working NuGet-based application and bring it over into your solution's solution folder, then manually edit the solution and project." Great. So now we're literally opening up the .csproj--XML gobbledygook I've only recently become comfortable with because of what led to Fast Koala, but up to that point .csproj files were black magic to me--and throwing in custom <Target> tags and <PropertyGroup> properties and <ItemGroup> items, .. oh my! But in our case yesterday, since we did use the magical menu item, we initially tried to deploy with (mostly) defaults in the generated .nuget folder, but because of our team's unique solution/project folder structure convention we had to move the packages folder around. (We also had to manually declare our Windows file share as a NuGet package source. But we had documented this.) This meant we had corrupt NuGet package references in the projects that already referenced the NuGet packages from before we set up package restore for the solution. So we reinstalled the NuGet packages. And we discovered that the binaries were then somehow not referenced. So we manually added the binary references, pointing to the packages folder. It still didn't work on TFS. So we uninstalled the NuGet packages. And then we reinstalled the NuGet packages. And we fiddled with missing configuration details in the .csproj file. And behold, the TFS build spat out a green checkmark.

And after all that, the four of us were blinking at our machines (we were all gathered together using Lync [aka Skype for Business *rolling eyes*]) wondering, “What .. the actual .... uh what just happened? How did we get here? What is the answer to this NuGet mess?” And we all agreed: “There is no answer. NuGet is a nightmare, and you have to keep poking at it and fussing with it for an hour or two to get it to cooperate, if you didn’t do it right or know how to do it right to begin with.”

This was supposed to be a time-saver. It was supposed to make our lives easier than manually grabbing files off the third-party web sites that provide them. Well, I'm sorry, but after cataloguing our dependencies, I think that what took us two hours to troubleshoot would have taken five minutes with a manually managed lib directory, max. Sure, that's about four minutes longer than working with the NuGet console--if every team member is consistent in how he uses NuGet. Get real!

Here's the core issue I have with NuGet: it's still third party. Microsoft has adopted and embraced NuGet, but MSBuild has absolutely no clue, zero, zilch, nada, no comprehension of what the heck a NuGet package is. Everything is manually scripted, manually spoonfed, and these conventions with the .nuget folder and dropping a NuGet.exe in there and all this noisy config/setup crap are all workaround boilerplate that desperately needs Microsoft to bake NuGet properly into Visual Studio, MSBuild, and TFS.

We had the same problem with MSDeploy. Who uses that, really? (I ask this in the same spirit as, "Who uses Azure anyway?" Surely plenty, but there are still many of us who don't, yet we are still Microsoft stack developers.) MSDeploy and ClickOnce publishing are so painfully externalized, black-box, and GUI-minded that for those of us actually trying to automate getting work done, it's easier to just write scripts. Really. In most cases it seems to be mandatory.

So Microsoft's answer to the problem was to make the problem worse, by introducing native support for 'bower'. Well, that's not fair. They made the problem .. ehh .. more complicated. At least with ASP.NET 5 they no longer push client-side libs through NuGet. That shifts that part of the problem to another, equivalent yet completely different technology, one not maintained by the .NET community--but then again, it doesn't need to be, since front-end web technology is vendor-neutral by necessity. But now we have to support 'bower' on top of NuGet. And what is the TFS story on this?! Oh, that's right, there is no Build process with ASP.NET 5, not really. We're back to ASP.NET 1.1 web sites with mandatory dynamically generated code compilation. No more .csproj project manifests. Whatever's in the file system is part of the project. The only difference here, other than gulp and bower via Node.js integration into ASP.NET where it feels like it doesn't belong (sex with a strange cousin seven freaky generations removed?), is the new Roslyn compiler that makes performant compile-at-any-time behavior more, .. ehh, modern. But without a project manifest you now have very limited control over your project files and their behaviors. Welcome back, ASP.NET 1.1 websites; we missed you. (Not.)

I'm sidetracked. This wasn't supposed to be about ASP.NET 5; it was supposed to be about NuGet. I don't know what the answer should be for NuGet, other than I think that if Microsoft is going to keep building upon NuGet for the development stack, they need to get the convention-over-configuration notion working properly, because NuGet here on ASP.NET MVC 5 projects with TFS CI build automation is a living configuration hell if not done carefully and sequentially.


Announcing Fast Koala, an alternative to Slow Cheetah

by Jon Davis 17. July 2015 19:47

So this is a quick FYI for teh blogrollz that I have recently been working on a little Visual Studio extension that will do for web apps what Slow Cheetah refused to do. It enables build-time config transformations for web apps as well as for Windows apps and class libraries.

Here's the extension: https://visualstudiogallery.msdn.microsoft.com/7bc82ddf-e51b-4bb4-942f-d76526a922a0  

Here's the Github: https://github.com/stimpy77/FastKoala

Either link will explain more.

A Career Reset: Back To Development

by Jon Davis 14. January 2014 17:17
I haven't been posting consistently, but neither has anything been consistent in my professional life lately. 2013 turned out to be the most difficult year in my 16 years of professional development--and that's saying a lot, considering that from 2000 to 2004, after the dot-com bubble burst, I was barely making five figures and ate dollar meals while I worked 16-20 hours a day (and slept 12 hours, and saw daytime flip every couple of days), working for myself trying to learn the ins and outs of basic micro-ISV work and independent consulting. 2013 was worse. It started out with me being assigned by my consulting firm employer (at the time) to fill another seat on an already overloaded project that was already sinking. A month later, the client abruptly laid off some 150-200 or so contractors, including myself, and a couple days after I got home I was laid off. Now, as awfully annoying and painful as that was, I did not blame my employer for it. Their position was that they suddenly had a bunch of people on the bench without an assignment, and it was not profitable to have them all sit on the bench waiting for an assignment, because bench time is still salary pay. I totally got that. It made perfect sense. And I was grateful that they said they would take me back if I came back when project opportunities opened up--so much so, in fact, that for the next four months I worked part-time as a contractor for someone I never really intended to stick around with, so that I could come back and demonstrate my loyalty.
 
What did have me scratching my head, however, was that some of the best talent in the company, including my manager, were also laid off. And when I did come back and say, "Please hire me back?", I found out that some people had subsequently quit. When these changes occur, the dynamics of the entire company change. People who were in seats of equality now sit in seats of power, and they're learning the ropes of leadership. Mistaking the role of leadership for the role of authority (they are not equivalent; a leader has prescribed authority, but only to support the leader's actual responsibilities), the worst case--and yet apparently the default scenario--is that, in their immaturity in such an abruptly acquired role, the perceived necessity to backstab and cut throats makes its way into the tempted hands of their newfound authority.
 
I Disagree.
I made some obvious mistakes when I was accepted back with the company. As a new person on the assigned project (a rather intimidating one, with 50 or so projects in the Visual Studio solution, an existing migration from CoffeeScript to raw Javascript, and a new migration, just beginning, from Backbone to Knockout), I believed it was imperative that I receive guidance, even as a senior developer, from the peers and leaders on the team regarding shared resources, without wading through hundreds of files, as well as regarding what would constitute embraceable, proper coding conventions in the new migration effort. I also insisted that new feature stories be properly clarified and documented, since anything assigned to me might end up on someone else's plate (i.e. I might be laid off again, or something similar). I still believe these things. The problem was, these things require the cooperation of the team, and the team was already shaken, just like I was, by the recent history of layoffs and quitters. They were managing their stress by cutting themselves off and enjoying the camaraderie they had already established within their cliques. I was the new guy, one with opinions. I was a distraction. And where I was pursuing healthy discussion, they wrote me off after only a few short words. Basically, I wasn't likeable to this particular team, and while I hadn't forgotten how I was coming across as a person, I was too dependent upon validation feedback that never came, at least not discreetly and directly from my leads in an effort to resolve the patterns. It really didn't help that I had high blood pressure and didn't have medication for it; I had headaches, my heart was pounding, I was on edge and couldn't focus. I disagreed with disagreeable things not vehemently but perhaps pesteringly. Most regretfully, I made the huge mistake of using false rhetoric ("I can't even read that") to deliberately make a point about unnecessary code complexity, where my words, if taken literally, could only be taken as "I'm too stupid for this job". And unfortunately, rather than giving me direct feedback about how I came off, to help me identify areas of blindness--not a single word--they relayed their frustrations up the ladder. An hour after I showed up to work one day and the team was huddled in a corner, backs turned, discussing (finally) why I deserved no respect and basically ripping me to shreds, out of nowhere, a company executive shuffled me onto an elevator and told me that I had been pulled off the project because I "lack skill".
 
Besides the weaknesses observed on the part of the team, I knew what went wrong on my part as well. That four-month timespan between getting laid off and getting re-hired was spent working part-time for a client who was not interested in modern coding conventions, and the rest of my time wasn't even spent thinking about software at all. I was taking a career break, tinkering with video editing. If I had known at the time how much this was going to cost me, I would have made much different choices. This is not the sort of industry in which one can afford to stop thinking in code. I showed up unprepared. This was a huge mistake, one I don't intend to ever repeat. But more importantly, the choices I made in presenting myself to the team were not a fit for that team's culture. In fact, while even today I believe I was right in my expectations that the team accommodate discussion and documentation, I was wrong to demand it. In fact, I should have kept my mouth shut. Showing up and making demands as the new guy, and freaking out when I don't get the leadership from the leaders that I need, isn't how teams work. This was a relatively large team; developers, QA, and project management combined, there were probably around 30 people in the meeting rooms, and I didn't hesitate to open my mouth and tell those who were writing things down how they should be written. Clearly, everyone in the room had learned the rule about rudeness, that the rudest thing a person can do is to call someone "rude" in front of others. But I guess in my temporary moments of thoughtlessness I was jumping around the edges of social appropriateness and trying to rely on immediate feedback to help me find that line. It didn't work. I just made myself look like an ass. It is unfortunate that no one I worked under criticized me privately afterwards. A designer fellow did, but I tended to wait to hear it from my leads. It's unfortunate that they just disappeared, and some time later I had to hear it from an executive who was relying solely on upstream feedback.
 
I spent the next three weeks on the bench. I spent that time learning how to write Android apps in "native" Java. I watched software development opportunities come in and be discussed, even with new recruits, but apparently that previous project's feedback, issued up to the leadership, tarnished my reputation as a developer so badly that my next assignment was in system operations--a DevOps role--working remotely without a single peer and not a single person from my company working alongside me. Yes, ops. PowerShell, SCOM, IIS logs, performance reporting. I sucked at this! Maybe not so much the PowerShell stuff--actually, the PowerShell stuff I did was pretty cool, I had fun with that. But ultimately I knew going into this that it was over. I, having been described in a company dinner announcement upon my return as "the elephant in the room", was no longer welcome in my capacity as a developer. And every day that I was stressed out about ops-related concerns and dealing with a difficult client who already had a history of contractors quitting on him was another day I couldn't get my head in gear around my passion for development. I voluntarily resigned in November.
 
I have been on the job hunt ever since. The fact is, I am indeed a senior-level software developer. I have learned my lessons, even as some of them are reminder lessons previously learned years ago. I have been trying to catch up since November on what is going on in the technology space. It has been crazy over the last year or two, how much has changed. Javascript is the new de facto industry coding language, for example, and as simple a language as it is, it is a difficult language to master, and while I don't hate the language, I do struggle to enjoy reading other people's Javascript code. The coding conventions of proper development with Javascript are also widely varied, since no one entity prescribes its best practices (no, not even Douglas Crockford), and all the various most popular libraries for both client and server (Node.js) have almost completely different ideas of what constitutes "good code". My responsibility in the industry as a senior-level software professional is to digest the bulk of what is very well embraced, identify the commonalities of ideals such as the implementation of [revealing] modules (as with Require.js), disambiguate the confusing semantics (prototypal inheritance is the opposite of using object.prototype.*), appreciate the good and bad of each of the popular client libraries (Knockout, Angular, Ember, et al), glean from alternative approaches to server solutions (Node.js, Nancy, OWIN, ..), stay on top of what has changed in the .NET stack (ASP.NET MVC 5, C# 5 [async], EF 6 ..), and remain mindful of enterprise software patterns and practices (DDD, BDD, etc). I still feel like I'm scratching the surface just listing these; there's so much more than this to know, but, again, the more you know the more you realize how little you know.
 
I'm also feeling the pain, yet again, of the staleness of this blog, not just the content, which is frequently lame, but the blog software. I am probably going to rewrite my blog engine. (This one wasn't written by me anyway, though I've built them in the past.) I've been down this road before, and had some false starts, and I'll reiterate, it's not that I think that blogging software is still interesting or representative in any way of some special class of software development. Besides the details of Atom, OPML, Blogger API, etc., to the ignorant mind a blog is just a sorted list, a list of blobs of formatted text. There is more to it than that, but that's not what matters, and that's not why I need to do it at some point. Others have stated it [link]: writing a blogging engine is a good step in practicing the basics of modern web development. In my case, I might take up a major tweak of Ghost, adding publishing APIs (if missing) and WYSIWYG editor support, all the while staying in Javascript and Node.JS. Why not the .NET Framework? Why, because I already know it; I don't need to prove to myself that I know it. Then again, I might anyway. I haven't started yet. Well, I have, but multiple false starts leading to abandonment lead me back to the drawing board--which is good. It forces me to check up on what's trending today.


Tags:

Career

Two First Things Before Switching To Windows 8 (And A Rant On Single Identity)

by Jon Davis 29. November 2013 00:20

I am normally an eager if not early embracer, especially when it comes to Windows. Windows 8, however, was an exception. I've tried, numerous times, to embrace Windows 8, but it is such a different operating system that had so many problems at launch that I was far more eager to chuck it and continue calling Windows 7 "the best operating system ever". My biggest frustrations with it were just like everyone else's--no Start Menu, awkward accessibility, etc., since I don't use and don't want a tablet for computing, just like I don't want to put my fingerprints all over my computer monitor, hello!? But the dealbreaker was the fact that, among other things, I was trying to use Adobe Premiere Pro for video editing and I kept getting bluescreens, whereas in Windows 7 I didn't. And it wasn't random, it was unavoidable.

At this point I've gotten over the UI hell, since I've now spent more time with Windows Server 2012 than with Windows 8, and Server 2012 has the same UI more or less. But now that Windows 8.1 is out and seems to address so many of the UI and stability problems (I've yet to prove out the video driver stability improvements but I'm betting on the passing of a year's time), I got myself a little more comfortable with Windows 8.1 by replacing the host operating system for the VMWare VM I was using with my [now former] client. It was just a host OS, but it was a start. But I've moved on.

In my home office environment, making the switch to Windows 8.1 more permanently has been slowly creeping up my priority list particularly now that I have an opportunity of a clean slate, in multiple senses of the word, not the least of which is I had a freed up 500 GB SSD hard drive just begging to be added to my "power laptop" (a Dell) to complement its 2nd gen i7, its upgraded 16GB of RAM, and its already existing 500 GB SSD drive. Yes I know that 1TB SSD hard drives are out there now but I already have these. So last night and tonight have been spent poking around with the equivalent of a fresh machine, as this nice laptop is still only a year old.

The experience so far of switching up to Windows 8.1 has been smooth. The first things that went on this drive were Windows 8.1, Visual Studio 2013, and Adobe Creative Cloud (the latter of which has taken a back seat for my time but I plan on doing video editing on weekends again down the road). Oh, and Steam, of course. ;) All of these things and the apps that load under them were set up within hours and finished downloading and installing overnight while I was sleeping.

But in the last hour I ran into a concern that motivated me to post this blog entry about transitioning to Windows 8. It has to do with Microsoft accounts. Before I get into that, let me get one thing out of the way: the title mentions "two things", so the first is that if you hated Windows 8.0, try Windows 8.1, because the Windows 8.0 quirks are much more swallowable now, which means you won't be so severely distracted from all the nice new features, not the least of which is the amazing startup time.

Now then. Microsoft Accounts. I want to like them. I want to love them. The truth is, I hate them. As a solution, it is oversold, and it is a reckless approach to a problem that many if not most people didn't have until Microsoft shoved the solution down their throats and made them have the problem that this is supposedly a solution for.

So before I go on ranting about that, here's the one other thing you should know. If you're tempted to follow the recommended procedure to setting up Windows 8.x but you want a local login/password for your computer or device and not one managed by your Microsoft Account, don't follow the recommended procedure. Look for small text for any opportunity to skip the association or creation of a Microsoft account for your device. But more importantly, once it is installed, even if it is a local user account, your account will be overhauled and converted to a Microsoft Account (managed online), and your username/password changed back to the Internet account username/password, unless you find that small text at every Microsoft Account login opportunity.


If you want to use apps that require you to log into a Microsoft account, such as Microsoft Store, or the Games or Music apps, when your Windows profile is already a Microsoft Account profile then you might be logged in automatically; otherwise it'll prompt you and then all apps are associated with that account. You may not want to do that. I didn't want to do that. I don't want my Internet password to be my Windows password, and I certainly don't want my e-mail address to be visibly displayed as the account name to anyone who looks at my locked computer. I like "Jon". Why can't it just be "Jon"? Get off, Microsoft! Well, it's all good, I managed to stick with a local account profile, but as for these apps, there was a detail that I didn't notice until I did a triple-take--yep, it took me three retroactive account conversions to notice the option. When you try to sign into a Microsoft Account enabled app like Games or Music and it begins the prompting process to convert your local user profile to a Microsoft Account profile, there is small text at the bottom that literally saves the day! It reads, "Sign into each app separately instead (not recommended)". No, of course it's not recommended, because Microsoft wants your dependency upon their Microsoft Account cloud profile strategy to succeed so that they can win the cloud wars. *yawn* Seriously, if you want a local user profile and you didn't mind how, over the last couple of decades, Internet-enabled apps made you re-enter the same credentials or maintain a separate set of credentials, then yes, this action is recommended.

I would also say that you should want a local user profile, and you should want to maintain separate credentials for different apps, and let me explain why.

I ran into this problem in Windows because everything I do with gaming is associated with one user profile, and everything I do with new software development is associated with another profile. But I want to log into Windows only once.

I don't want development and work related interests cluttering up my digital profile that exists for games, and I don't want my gaming interests cluttering up my digital profile that exists for development and work. Likewise, I don't want gamer friends poking around at my developer profile, and I don't want my developer friends poking around at my gaming history. Outside of Microsoft accounts, I have the same attitude about other social networks. I actually use each social network for a different kind of crowd. I use Facebook for family, church friends, and good acquaintances I trust, and for occasional distracting entertainment. I use Twitter and Google+ for development and career interests and occasional entertainment and news. And so on. Now I know that Google+ has this great thing called circles, but here's the problem: you only get one sales pitch to the world, one profile, one face. I have a YouTube channel that has nothing to do with my work; I didn't want to put YouTubey stuff on it for co-workers and employers to see, nor did I want work stuff to be seen by YouTube visitors. Fortunately Facebook and Google+ have "pages" identities, and that's a great start to helping with such problems, though I feel weird using "pages" for my alter egos rather than for products or organizations.

I have a problem with Microsoft making the problem worse. Having just one identity for every app and every web site is a bad, bad idea.

Even anonymity can be a good thing. I play my favorite game as "Stimpy" or as "Dingbat"; I don't want people to know me by name, that's just creepy. Who I really am is a non-essential element to my interaction with the application, so long as I am uniquely identified and validated. I don't want to be known on the web site as "that one guy, with that fingerprint, who buys that food, who plays those games, who watches those videos, who expressed those comments". No. It's trending now to use Facebook identities and the like for comments to eliminate anonymity, the idea being to get commenters to stop being so malicious, but it's making other problems worse. I don't want my Facebook friends and family to potentially know about my public comments on obscure articles and blog posts. No, this isn't good; let me isolate my identity to my different interests. What I do over here, or over there, is none of your business!


Tags:

Opinion | Windows

Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to it to make it truly an I/O library. In the mean time, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are simplified into a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (as optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below
Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as in working with localStorage, it returns a "synchronous promise" which is essentially a function that can optionally be immediately invoked and it will wrap .success(value) and return the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The examples below are equivalent, but only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();
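For illustration only, here is a minimal sketch (a hypothetical helper, not necessarily XIO's actual internals) of what such a "synchronous promise" might look like for a value that is already available:

function synchronousPromise(value) {
    // invoking the "promise" directly returns the wrapped value
    var result = function () {
        return value;
    };
    // .success() is still available for the asynchronous-style convention
    result.success = function (callback) {
        callback(value);
        return result;
    };
    result.complete = function (callback) {
        callback();
        return result;
    };
    return result;
}

var sync = synchronousPromise("my_value");
var val1 = sync();                                      // synchronous convention
sync.success(function (v) { /* v === "my_value" */ });  // asynchronous convention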

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implemented functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique, using xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently with assertions that it will have some CRUD operations available, the perspective is flipped and you are focusing on what you need to do, the verbs, first, while the target becomes more like a parameter or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves so that, rather than repositories having an unknown set of operations with unknown interface styles and other features, instead, your standard CRUD operations, which are predictable, have a set of valid repository targets that support those operations.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!
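For example, reusing the myRepository definition from above, both styles address the same operation:

var myRepository = xio.define('myRepository', {
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

// conventional repository[method](key, value) style, using the object returned by define()
myRepository.get("mykey").success(function (value) { /* .. */ });

// equivalent XIO-style xio[verb][repository](key) convention
xio.get.myRepository("mykey").success(function (value) { /* .. */ });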

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented based on JSON/Javascript literals field sets (send only deltas)
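To illustrate the patch verb's "send only deltas" behavior, here is a hypothetical shallow-merge sketch (not necessarily XIO's actual implementation) of how a patch literal might be applied to a previously stored object:

function applyPatch(original, patchdata) {
    // copy only the fields present in the patch; everything else is left intact
    for (var field in patchdata) {
        if (patchdata.hasOwnProperty(field)) {
            original[field] = patchdata[field];
        }
    }
    return original;
}

// applyPatch({ first: "Bob", last: "Jones" }, { last: "Jonas" })
// => { first: "Bob", last: "Jonas" }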
Examples
// initialize

var xio = Xio(); // initialize a module instance named "xio"
localStorage
xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) {
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});
sessionStorage
xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");
cookie
xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, retaining only xio.set["cookie"](key, value), you can use the automatically returned helper replacer functions:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");

Note that using this approach, while more expressive and potentially more convertible to other CRUD targets, also results in each helper function deleting the previous value to set the value with the new adjustment.
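As a hypothetical sketch (not XIO's actual internals), a chainable replacer helper with that behavior might look something like this:

function cookieSetter(key, value) {
    var settings = {};
    function write() {
        // each adjustment re-sets the cookie, replacing the previously written value
        var s = key + "=" + encodeURIComponent(value);
        if (settings.expires) s += "; expires=" + new Date(settings.expires).toUTCString();
        if (settings.path) s += "; path=" + settings.path;
        if (settings.domain) s += "; domain=" + settings.domain;
        document.cookie = s;
    }
    var api = {
        expires: function (e) { settings.expires = e; write(); return api; },
        path: function (p) { settings.path = p; write(); return api; },
        domain: function (d) { settings.domain = d; write(); return api; }
    };
    write();
    return api;
}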

session cookie
xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
persistent cookie
xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
web server resource (basics)
var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().

Where the methods property is defined as an array of "GET", "POST", etc., each one mapping to a standard XIO verb, an XHR route will be internally created on behalf of the rest of the options defined in the options object that is passed in as a parameter to define(). The return value of define() is an object that lists all of the various operations that were wrapped for XIO (i.e. get(), post(), etc).

The rest of the options are used, for now, as jQuery's $.ajax(..., options) parameter. The async property defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally immediately invoke with parens (()), which will return the value that is normally passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.

web server resource (asynchronous GET)
xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});
web server resource (synchronous GET)
xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction
web server resource POST
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});
web server resource (DELETE)
xio.delete.myresourceContainer("myresource");
web server resource (PUT)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
web server resource (PATCH)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
custom implementation and redefinition
xio.define("custom1", {
    get: function(key) { return "teh value for " + key; }
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fall back to long polling, and that has no dependency upon jQuery. Instead, what has so far been implemented is a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on nodejs.

Additionally, in a completely isolated parallel path, if no progress is made by the ASP.NET SignalR team to make the SignalR client freed from jQuery, xio.js might become tailored to be a somewhat code compatible client implementation or a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at top, you definitely may use it for any project. I will drop in a license (probably Apache 2 or BSD or Creative Commons Attribution or somesuch) in the near future.

A Consistent Approach To Client-Side Cache Invalidation

by Jon Davis 10. August 2013 17:40

Download the source code for this blog entry here: ClientSideCacheInvalidation.zip

TL;DR?

Please scroll down to the bottom of this article to review the summary.

I ran into a problem not long ago where some JSON results from an AJAX call to an ASP.NET MVC JsonResult action were being cached by the browser, quite intentionally by design, but were no longer up-to-date, and, short of devising a new approach to route manipulation or reworking any of the other fundamental infrastructural designs for the endpoints (because there were too many), our hands were tied. The caching was being done using the ASP.NET OutputCacheAttribute on the action being invoked in the AJAX call, something like this (not really, but this briefly demonstrates caching):

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

@model dynamic
@{
    ViewBag.Title = "Home";
}
<h2>Home</h2>
<div id="results"></div>
<div><button id="reload">Reload</button></div>
@section scripts {
    <script>
        var $APPROOT = "@Url.Content("~/")";
        $.getJSON($APPROOT + "Home/GetData", function (o) {
            $('#results').text("Last modified: " + o.LastModified);
        });
        $('#reload').on('click', function() {
            window.location.reload();
        });
    </script>
}

Since we were using a generalized approach to output caching (as we should), I knew that any solution to this problem should also be generalized. My first thought was based on the mistaken assumption that the default [OutputCache] behavior was to rely on client-side caching, since client-side caching was what I was observing while using Fiddler. (Mind you, in the above sample this is not the case, it is actually server-side, but that is probably because of the amount of data being transferred. I’ll explain after I describe what I did based on my false assumption.)

Microsoft’s default convention for implementing cache invalidation is to rely on “VaryBy..” semantics, such as varying the route parameters. That is great except that the route and parameters were currently not changing in our implementation.

So, my initial proposal was to force the caching to be done on the server instead of on the client, and to invalidate when appropriate.

 

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    Response.RemoveOutputCacheItem(Url.Action("GetData"));
    return Json("OK");
}

[OutputCache(Duration = 300, Location = OutputCacheLocation.Server)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

 

 

<button id="invalidate">Invalidate</button></div>

 

 

$('#invalidate').on('click', function() {
    $.post($APPROOT + "Home/DoSomething", null, function(o) {
        window.location.reload();
    }, 'json');
});

 

(Screenshot: While Reload has no effect on the Last modified value, the Invalidate button causes the date to increment.)

When testing, this actually worked quite well. But concerns were raised about the payload of memory on the server. Personally I think the memory payload in practically any server-side caching is negligible, certainly if it is small enough that it would be transmitted over the wire to a client, so long as it is measured in kilobytes or tens of kilobytes and not megabytes. I think the real concern is that transmission; the point of caching is to make the user experience as smooth and seamless as possible with minimal waiting, so if the user is waiting for a (cached) payload, while it may be much faster than the time taken to recalculate or re-acquire the data, it is still measurably slower than relying on browser cache.

The default implementation of OutputCacheAttribute is actually OutputCacheLocation.Any. This indicates that the cached item can be cached on the client, on a proxy server, or on the web server. From my tests, for tiny payloads, the behavior seemed to be caching on the server and no caching on the client; for a large payload from GET requests with querystring parameters, the behavior seemed to be caching on the client but with an HTTP query carrying an “If-Modified-Since” header, resulting in a 304 Not Modified from the server (indicating it was also cached on the server, but the server verified that the client’s cache remains valid); and for a large payload from GET requests with all parameters in the path, the behavior seemed to be caching on the client without any validation checking from the client (no HTTP request for an If-Modified-Since check). Now, to be quite honest, I am only guessing that these were the distinguishing factors behind these behavior observations. Honestly, I saw variations of these behaviors happening all over the place as I tinkered with scenarios, and this was the initial pattern I felt I was observing.

At any rate, for our purposes we were currently stuck with relying on “Any” as the location, which in theory would drop server-side cache entries if the server ran short on RAM (I don’t actually know; the truth can probably be researched, but I don’t have time to get into it). The point of all this is, we have client-side caching that we cannot get away from.

So, how do you invalidate the client-side cache? Technically, you really can’t. The browser controls the cache bucket and no browsers provide hooks into the cache to invalidate them. But we can get smart about this, and work around the problem, by bypassing the cached data. Cached HTTP results are stored on the basis of varying by the full raw URL on HTTP GET methods, they are cached with an expiration (in the above sample’s case, 300 seconds, or 5 minutes), and are only cached if allowed to be cached in the first place as per the HTTP header directives in the HTTP response. So, to bypass the cache you don’t cache, or you need to know up front how long the cache should remain until it expires—neither of these being acceptable in a dynamic application—or you need to use POST instead of GET, or you need to vary up the URL.

Microsoft originally got around the caching problem in ASP.NET 1.x by forcing the “normal” development cycle in the lifecycle of <form> tags that always used the POST method over HTTP. Responses from POST requests are never cached. But POSTing is not clean as it does not follow the semantics of the verbiage if nothing is being sent up and data is only being retrieved.

You can also use ETag in the HTTP headers, which isn’t particularly helpful in a dynamic application as it is no different from a URL + expiration policy.

To summarize, to control cache:

  • Disable caching from the server in the Response header (Pragma: no-cache)
  • Predict the lifetime of the content and use an expiration policy
  • Use POST not GET
  • Etag
  • Vary the URL (case-sensitive)

Given our options, we need to vary up the URL. There are a number of approaches to this, but almost all of the approaches involve relying on appending or modifying the querystring with parameters that are expected to be ignored by the server.

$.getJSON($APPROOT + "Home/GetData?_=" + Date.now(), function (o) {
    $('#results').text("Last modified: " + o.LastModified);
});

In this sample, the URL is appended with “?_=”+Date.now(), resulting in this URL in the GET:

/Home/GetData?_=1376170287015

This technique is often referred to as cache-busting. (And if you’re reading this blog article, you’re probably rolling your eyes. “Duh.”) jQuery inherently supports cache-busting, but it does not do it on its own from $.getJSON(), it only does it in $.ajax() when the options parameter includes {cache: false}, unless you invoke $.ajaxSetup({ cache: false }); first to disable all caching. Otherwise, for $.getJSON() you would have to do it manually by appending the URL. (Alright, you can stop rolling your eyes at me now, I’m just trying to be thorough here..)
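For comparison, here is what leaning on jQuery's built-in option looks like, reusing the $APPROOT variable and GetData endpoint from the earlier sample:

// per-call cache-busting; jQuery appends a "_=<timestamp>" querystring parameter
$.ajax($APPROOT + "Home/GetData", {
    dataType: "json",
    cache: false,
    success: function (o) {
        $('#results').text("Last modified: " + o.LastModified);
    }
});

// or disable caching for all subsequent jQuery AJAX calls (including $.getJSON)
$.ajaxSetup({ cache: false });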

This is not our complete solution. We have a couple problems we still have to solve.

First of all, in a complex client codebase, hacking at the URL from application logic might not be the most appropriate approach. Consider if you’re using Backbone.js with routes that synchronize objects to and from the server. It would be inappropriate to modify the routes themselves just for cache invalidation. A more generalized cache invalidation technique needs to be implemented in the XHR-invoking AJAX function itself. The approach in doing this will depend upon your Javascript libraries you are using, but, for example, if jQuery.getJSON() is being used in application code, then jQuery.getJSON itself could perhaps be replaced with an invalidation routine.

var gj = $.getJSON;
$.getJSON = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return gj.call(this, url, data, callback);
};

This is unconventional and probably a bad example since you’re hacking at a third party library, a better approach might be to wrap the invocation of $.getJSON() with an application function.

var getJSONWrapper = function (url, data, callback) {
    url = invalidateCacheIfAppropriate(url); // todo: implement something like this
    return $.getJSON(url, data, callback);
};

And from this point on, instead of invoking $.getJSON() in application code, you would invoke getJSONWrapper, in this example.

The second problem we still need to solve is that the invalidation of cached data that derived from the server needs to be triggered by the server because it is the server, not the client, that knows that client cached data is no longer up-to-date. Depending on the application, the client logic might just know by keeping track of what server endpoints it is touching, but it might not! Besides, a server endpoint might have conditional invalidation triggers; the data might be stale given specific conditions that only the server may know and perhaps only upon some calculation. In other words, invalidation needs to be pushed by the server.

One brute force, burdensome, and perhaps a little crazy approach to this might be to use actual “push technology”, formerly “Comet” or “long-polling”, now WebSockets, implemented perhaps with ASP.NET SignalR, where a connection is maintained between the client and the server and the server then has this open socket that can push invalidation flags to the client.

We had no need for that level of integration and you probably don’t either, I just wanted to mention it because it might come back as food for thought for a related solution. One scenario I suppose where this might be useful is if another user of the web application has caused the invalidation, in which case the current user will not be in the request/response cycle to acquire the invalidation flag. Otherwise, it is perhaps a reasonable assumption that invalidation is only needed, and only triggered, in the context of a user’s own session. If not, perhaps it is a “good enough” assumption even if it is sometimes not true. The expiration policy can be set low enough that a reasonable compromise can be made between the current user’s changes and changes invoked by other systems or other users.

While we may not know what server endpoint might introduce the invalidation of client cache data, we could assume that the invalidation will be triggered by any server endpoint(s), and build invalidation trigger logic on the response of server HTTP responses.

To begin implementing some sort of invalidation trigger on the server I could flag invalidations to the client using HTTP header(s).

public JsonResult DoSomething()
{
    //
    // Do something here that has a side-effect
    // of making the cached data stale
    //
    InvalidateCacheItem(Url.Action("GetData"));
    return Json("OK");
}

public void InvalidateCacheItem(string url)
{
    Response.RemoveOutputCacheItem(url); // invalidate on server
    Response.AddHeader("X-Invalidate-Cache-Item", url); // invalidate on client
}

[OutputCache(Duration = 300)]
public JsonResult GetData()
{
    return Json(new
    {
        LastModified = DateTime.Now.ToString()
    }, JsonRequestBehavior.AllowGet);
}

At this point, the server is emitting a trigger to the HTTP client that says that “as a result of a recent operation, that other URL, the one for GetData, is no longer valid for your current cache, if you have one”. The header alone can be handled by different client implementations (or proxies) in different ways. I didn’t come across any “standard” HTTP response that does this “officially”, so I’ll come up with a convention here.


Now we need to handle this on the client.

Before I do anything else, I first need to refactor the existing AJAX functionality on the client so that, instead of using $.getJSON, I might use $.ajax or some other flexible XHR handler, and wrap it all in custom functions such as httpGET()/httpPOST() and handleResponse().

var httpGET = function(url, data, callback) {
    return httpAction(url, data, callback, "GET");
};
var httpPOST = function (url, data, callback) {
    return httpAction(url, data, callback, "POST");
};
var httpAction = function(url, data, callback, method) {
    url = cachebust(url);
    if (typeof(data) === "function") {
        callback = data;
        data = null;
    }
    $.ajax(url, {
        data: data,
        type: method,
        success: function(responsedata, status, xhr) {
            handleResponse(responsedata, status, xhr, callback);
        }
    });
};
var handleResponse = function (data, status, xhr, callback) {
    handleInvalidationFlags(xhr);
    callback.call(this, data, status, xhr);
};
function handleInvalidationFlags(xhr) {
    // not yet implemented
}
function cachebust(url) {
    // not yet implemented
    return url;
}

// application logic
httpGET($APPROOT + "Home/GetData", function(o) {
    $('#results').text("Last modified: " + o.LastModified);
});
$('#reload').on('click', function() {
    window.location.reload();
});
$('#invalidate').on('click', function() {
    httpPOST($APPROOT + "Home/Invalidate", function (o) {
        window.location.reload();
    });
});

At this point we’re not doing anything yet, we’ve just broken up the HTTP/XHR functionality into wrapper functions that we can now modify to manipulate the request and to deal with the invalidation flag in the response. Now all our work will be in handleInvalidationFlags() for capturing that new header we just emitted from the server, and cachebust() for hijacking the URLs of future requests.

To deal with the invalidation flag in the response, we need to detect that the header is there, and add the cached item to a cached data set that can be stored locally in the browser with web storage. The best place to put this cached data set is in sessionStorage, which is supported by all current browsers. Putting it in a session cookie (a cookie with no expiration flag) works but is less ideal because it adds to the payload of all HTTP requests. Putting it in localStorage is less ideal because we do want the invalidation flag(s) to go away when the browser session ends, because that’s when the original browser cache will expire anyway. There is one caveat to sessionStorage: if a user opens a new tab or window, the browser will drop the sessionStorage in that new tab or window, but may reuse the browser cache. The only workaround I know of at the moment is to use localStorage (permanently retaining the invalidation flags) or a session cookie. In our case, we used a session cookie.

Note also that IIS is case-insensitive on URI paths, but HTTP itself is not, and therefore browser caches will not be. We will need to ignore case when matching URLs with cache invalidation flags.

Here is a more or less complete client-side implementation that seems to work in my initial test for this blog entry.

function handleInvalidationFlags(xhr) {
    // capture HTTP header
    var invalidatedItemsHeader = xhr.getResponseHeader("X-Invalidate-Cache-Item");
    if (!invalidatedItemsHeader) return;
    invalidatedItemsHeader = invalidatedItemsHeader.split(';');
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // update invalidation flags data set
    for (var i in invalidatedItemsHeader) {
        invalidatedItems[prepurl(invalidatedItemsHeader[i])] = Date.now();
    }
    // store revised invalidation flags data set back into session storage
    sessionStorage.setItem("invalidated-cache-items", JSON.stringify(invalidatedItems));
}

// since we're using IIS/ASP.NET which ignores case on the path, we need a function to force lower-case on the path
function prepurl(u) {
    return u.split('?')[0].toLowerCase() + (u.indexOf("?") > -1 ? "?" + u.split('?')[1] : "");
}

function cachebust(url) {
    // get invalidation flags from session storage
    var invalidatedItems = sessionStorage.getItem("invalidated-cache-items");
    invalidatedItems = invalidatedItems ? JSON.parse(invalidatedItems) : {};
    // if the item matches, return the URL concatenated with a cache-busting parameter
    var invalidated = invalidatedItems[prepurl(url)];
    if (invalidated) {
        return url + (url.indexOf("?") > -1 ? "&" : "?") + "_nocache=" + invalidated;
    }
    // no match; return unmodified
    return url;
}

Note that the date/time value of when the invalidation occurred is permanently stored as the concatenation value. This allows the data to remain cached, just updated to that point in time. If invalidation occurs again, that concatenation value is revised to the new date/time.

Running this now, after invalidation is triggered by the server, the subsequent request of data is appended with a cache-buster querystring field.
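Since, as noted above, we ultimately used a session cookie rather than sessionStorage for the invalidation flags, here is a minimal sketch (with hypothetical helper names) of cookie-backed get/set functions that could stand in for the sessionStorage.getItem/setItem calls in handleInvalidationFlags() and cachebust():

function getInvalidationFlags() {
    var match = document.cookie.match(/(?:^|;\s*)invalidated-cache-items=([^;]*)/);
    return match ? JSON.parse(decodeURIComponent(match[1])) : {};
}

function setInvalidationFlags(flags) {
    // no "expires" attribute, so this remains a session cookie
    document.cookie = "invalidated-cache-items=" +
        encodeURIComponent(JSON.stringify(flags)) + "; path=/";
}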


 

In Summary, ..

.. a consistent approach to client-side cache invalidation triggered by the server might be by following these steps.

  1. Use X-Invalidate-Cache-Item as an HTTP response header to flag potentially cached URLs as expired. You might consider using a semicolon-delimited response to list multiple items. (Do not URI-encode the semicolon when using it as a URI list delimiter.) Semicolon is a reserved/invalid character in URI and is a valid delimiter in HTTP headers, so this is valid.
  2. Someday, browsers might support this HTTP response header by automatically invalidating browser cache items declared in this header, which would be awesome. In the mean time ...
  3. Capture these flags on the client into a data set, and store the data set into session storage in the format:
    		{
    	"http://url.com/route/action": (date_value_of_invalidation_flag),
    	"http://url.com/route/action/2": (date_value_of_invalidation_flag)
    	}
    	
  4. Hijack all XHR requests so that the URL is appropriately appended with cachebusting querystring parameter if the URL was found in the invalidation flags data set, i.e. http://url.com/route/action becomes something like http://url.com/route/action?_nocache=(date_value_of_invalidation_flag), being sure to hijack only the XHR request and not any logic that generated the URL in the first place.
  5. Remember that IIS and ASP.NET by default convention ignore case (“/Route/Action” == “/route/action”) on the path, but the HTTP specification does not and therefore the browser cache bucket will not ignore case. Force all URL checks for invalidation flags to be case-insensitive to the left of the querystring (if there is a querystring, otherwise for the entire URL).
  6. Make sure the AJAX requests’ querystring parameters are in consistent order. Changing the sequential order of parameters may be handled the same on the server but will be cached differently on the client. (See the sketch after this list.)
  7. These steps are for “pull”-based XHR-driven invalidation flags being pulled from the server via XHR. For “push”-based invalidation triggered by the server, consider using something like a SignalR channel or hub to maintain an open channel of communication using WebSockets or long polling. Server application logic can then invoke this channel or hub to send an invalidation flag to the client or to all clients.
  8. On the client side, an invalidation flag “push” triggered in #7 above, for which #1 and #2 above would no longer apply, can still utilize #3 through #6.
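Regarding step 6, a small hypothetical helper that normalizes querystring parameter order before a GET request might look like this:

function normalizeQuerystring(url) {
    var parts = url.split('?');
    if (parts.length < 2) return url;
    // sort the parameters so equivalent requests map to the same browser cache entry
    return parts[0] + '?' + parts[1].split('&').sort().join('&');
}

// normalizeQuerystring("/Home/GetData?b=2&a=1") => "/Home/GetData?a=1&b=2"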

You can download the project I used for this blog entry here: ClientSideCacheInvalidation.zip


Tags:

ASP.NET | C# | Javascript | Techniques | Web Development


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
