Fediverse Microblogging Protocols: Part 1

by Jon Davis 3. November 2019 15:19

You may or may not have heard of "the Fediverse". No? How about Mastodon? If not that, surely you have heard of gab.com.

Gab.com was rebuilt a year or so ago upon Mastodon's open source software, a microblogging web platform. It is a branded part of the Fediverse, which is a series of microblogging sites that are all able and potentially willing (via a human decision and a configuration) to talk to each other. Effectively, this is fail-safe distributed computing with social microblogging (Twitter clones) as the form of application. The objective with these frameworks was to establish a censorship-proof system of web hosting, so that people who wanted to enjoy microblogging topics considered outside of what is generally socially acceptable could always have a place to go on the Internet, because the microblogging hosts are effectively proxied out by many servers. One might say it is the dark web originating on the open, public web/Internet.

The Fediverse was also created largely by the alt-left. These are the bizarre perverts of the world, such as nudists, or furries--people who fantasize about bestiality, typically via hand-drawn anime art or cosplay. Until recently, sexual deviants were considered to have strayed too far from the norms of society to be allowed a place on the Internet, so censorship was frequent, which is a key motivator of the Fediverse. However, gab.com, which also struggles with a history of censorship, facilitates thoughts and content from people from all walks of life, including some people on the extreme alt-right--Neo-Nazis and the like. So gab.com forked Mastodon to recreate the gab.com social network on the Mastodon framework, replacing, adding, and removing features according to Gab's whims and needs. Needless to say, the Mastodon community proved furious about this--that alt-right zealots are allowed to have an uncensored platform--but they made this bed.

Meanwhile I personally have been very studiously monitoring social networking technologies, companies, and trends since the beginning. I was a participant from the beginning. Like, the very, very beginning. GeoCities was the socially driven personal expression platform of choice at the beginning. My first web site was on GeoCities, until I started hosting my own domain. Then came MySpace and blogging. Everyone had "a MySpace". Then, from out of nowhere, Facebook took off, and the personalized branding of an individual's profile on MySpace was deemed exotically ugly and forever forgotten in favor of the standardized look and feel of Facebook's fixed layout. All the while, Facebook's mixing of "wall" posts from various users--friends and followed feeds--was something new and amazing. It filled the gap left by the antiquation of NNTP newsgroups and message boards. It completely changed the landscape of Internet social networking. Then came Twitter and YouTube. With YouTube, sharing your life on video became not just possible, it became a normal way for people to experience pseudo-relationships of mindsharing, complete with facial expressions and body language, with the connected world around them. The people of the world became truly cohesive; the Internet was the glue.

And then people who had traditionally held power over people's minds began to panic. Facebook and Google and the like saw their roles in the world as guidance mechanisms for swaying world opinion, and mainstream media (MSM) began to have their published opinions (*cough* .. "news") prioritized as curated "Trends". YouTube stopped allowing the popularity of everyday people to reveal their priority in searches and top-level browsing, and instead now prioritizes packaged infotainment like CNN as the primary resources to be found for any given topic. And Twitter, Facebook, YouTube, Instagram, and so on are all perpetually being mentioned in the (alternative) news headlines for censoring people for sharing thoughts and opinions that go against the preconceived narratives that mainstream media and big tech employees (most of whom are in coastal-state or otherwise high-population cities, I might add, notably San Francisco, Portland, Los Angeles, and New York City) are trying to guide the world to embrace.

This is unacceptable, but at the same time expected. As a conservative Christian, I must admit that we had "quite a ride" of privileged authority in driving society for so many decades, centuries, yet we were told by Jesus to expect that the world would hate us, that we would be the underdog. That hasn't happened in our Western society, not really, not in my lifetime. So I can't be angry that conservatism has started to move towards the back seat. Nor should I suggest we are somehow "victims". We're not. The victims are not in the West. The victims are in China, in Pakistan, in India, all over the place, other than the West. The West's time is coming. It isn't here yet, despite "progress" in multiculturalism efforts. But it's coming. This has been instilled in me since pre-Matrix anticipation even in the music I listened to as a youth.

So my interest is in watching for, supporting, and participating in censor-proof platforms, "for everyone" but including conservative Christians. This is why I've invested in Gab. I've paid for a lifetime pro membership with Gab. But I've only just gotten started.

Gab.com made their software (their fork of Mastodon) open source and published it at https://code.gab.com/. They invite the Internet community to clone it and set up Gab Social instances all over the Internet. Presumably people have. So, I tried to. I can't. Trying on both my home workstation and my laptop, both of which run Windows 10, I am stuck with local setup failures, starting with the error that webpack-dev-server never actually gets itself up. Gab Social's repository doesn't seem to have a support group, there doesn't seem to be a Gab Social group on gab.com, and no one replied to my post on Gab, so I'm left in the dark here, and with this I'm also seeing some serious community-related limitations of the platform in the first place.

My motives for running Gab Social and working on its code myself were indeterminate--it's not truly open source for the gab.com platform so much as disclosed source--but at minimum I wanted to see what it consisted of. And even though I can't get it up and running, much less adapt it to my needs and interests--without, say, who knows, maybe somehow $buying an audience with someone at Gab?--I can at least see on the surface what it consists of.

The Gab Social software stack consists of:

  • PostgreSQL,
  • Ruby on Rails,
  • Node.JS, 
  • Redis, and
  • ReactJS,
  • with deployments on Docker

Knowing that the software is based on Mastodon, which is a branded implementation player on the Fediverse, this means that it runs on one or more of these protocols:

  • ActivityPub, the W3C-recommended federation protocol that current Mastodon versions speak,
  • OStatus, the older federation suite (Atom feeds, WebSub/PubSubHubbub, Salmon) that earlier Mastodon versions supported, and
  • WebFinger, used to discover accounts across instances.

Knowing the protocol(s) used for Fediverse participating software is important because it means that I as a Microsoft stack .NET developer might be able to build a microblogging-oriented social network infrastructure that is compatible with Gab Social, and perhaps integrate with it (at least to the extent of it being on the same Fediverse), without actually utilizing any of its RoR/NodeJS/etc implementation, not that I'm fundamentally opposed to RoR/NodeJS/etc.
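
To make that concrete, here is a minimal sketch--my own illustration, not anything taken from the Gab Social or Mastodon codebases--of the two JSON documents any Fediverse-compatible server has to be able to produce, whatever its implementation stack. The instance "example.social" and the account "jon" are hypothetical.

// 1. WebFinger discovery document, served in response to
//    GET https://example.social/.well-known/webfinger?resource=acct:jon@example.social
var webfingerResponse = {
    subject: "acct:jon@example.social",
    links: [{
        rel: "self",
        type: "application/activity+json",
        href: "https://example.social/users/jon"
    }]
};

// 2. The ActivityPub actor document served at that href. (Real-world Mastodon
//    federation additionally expects a publicKey entry for HTTP signatures.)
var actorDocument = {
    "@context": "https://www.w3.org/ns/activitystreams",
    id: "https://example.social/users/jon",
    type: "Person",
    preferredUsername: "jon",
    inbox: "https://example.social/users/jon/inbox",   // where remote servers POST activities
    outbox: "https://example.social/users/jon/outbox"  // where this account's activities are published
};

Any stack that can serve and consume documents like these--including a Microsoft/.NET stack--can, in principle, federate.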
 
I will be following up with a Part 2 of this blog entry when I have done more homework on these and have come to a better understanding of what's been going on here. Sadly, all of the bullet points in the last list above are foreign to me, which means I haven't been monitoring social networking tech as closely as I should have, which is particularly shameful considering I have had a number of false starts on social network platform framework side projects over the years, albeit those efforts effectively stopped right around the time these seem to have begun.


Blog | General Technology | Software Development | Web Development | Microblogging

Time to blow off some dust, toot my own horn, and hit the books

by Jon Davis 14. October 2019 23:59

Woah, Nelly!

Jon Davis's blog is back up! Last post was .. more than three years ago! Daaaang! How ya'll been?! Missed ya'll! Miss me?

I'm just posting a quick update here to let ya'll know that the gears in my tech world have started turning again, and to update ya'll on what's going on.

You may or may not have noticed that my root web site http://jondavis.net/ went through a couple phases and got stuck with that dorky "please wait" console-ish page. I was supposed to have replaced it by *cough* .. 2016. It's now approaching the year 2020. What happened? Well what happened was I got a few kicks in the pants. Got myself into some really humbling situations, not the fun kind. Perhaps you might say I got phased out. But I never left, so now I'm in high gear phasing back in.  I might try to explain ...

A few blog entries ago (this goes back to 2014) I wrote about how I needed to reset things, and how I needed to mull over what was going on at work. So here's a recap on my career path lately up till my previous job:

 

  1. In 2012 I was hired at Neudesic, a prominent consultancy firm. I dreamed of surrounding myself with smart people I could glean from. It turned out that things were pretty erratic at Neudesic. On my first project assignment, the client was Microsoft themselves. Prior to my joining they had just canned a project to build a big, beautiful Windows 8 themed web site where developers and other staff would post all-important articles and index them with Lucene.NET. Architecturally it was a disaster, and they canned the Microsoft executive at the same time they canned the project. So when I joined the team, they were trying to salvage that project. I worked with them to try to dig into the code, get it up and running, figure out the performance issues, etc., and at some point I flew out to Redmond, Washington to discuss with the local Neudesic managers who were interfacing with Microsoft in person. Then I opened my mouth. I said, "Ya'll are really just makin' a blog. Why not add blog-ish things? Why reimplement Sharepoint?" Suddenly we were all fired. So then they shipped me off to Pulte Homes, where I did some work, but the architect for that project and I didn't seem to know what we were doing on the user authorization detail and I proved blunt enough that they "couldn't afford" to keep me on after a couple months. So then they shipped me off to California to work with Ward & Brown on the Obamacare / ACA implementation there, again late to the party and annoyingly insistent on "helping". But when QA and I asked around about where the production/staging servers were, and they realized they had apparently missed that part, they canned all their contractors. All 250 of us. So there I was stuck in the Neudesic-Phoenix office waiting for them to call me in. And they did. They laid me off. Like the child-man that I was, I accidentally let them catch me choking. They did say they might be able to take me back in a few months, though, so ...
  2. For the next few months I did some contract work for a local entrepreneur. I waited that out for a while and ultimately I wanted to go back to Neudesic, so ...
  3. In mid-year 2013 I asked Neudesic to take me back in. They did. They brought me back with open arms. It was more awkward for the lot of them than I or anyone anticipated. I'd made myself a reputation for being unrefined up to that point, I'm pretty sure. But anyway, when I arrived (a second time) I noticed that more people, including my former manager, were no longer there. They were laid off too. And I shipped back to Pulte Homes, on another, bigger project. I was up front with them, though, about what specific tools I was unacquainted with, specifically Bootstrap and the like, at the time. They shrugged that off, brought me in anyway. That project was led by--ima be frank here--a really, REALLY bitchy, control freakish, PMS-y woman. I tried to overlook it at the time. I tried to pretend it wasn't so at the time. I'm thinking back half a decade now, and I'm saying it like how I witnessed it. She was horrible. If I ever get in a situation like that again I'll run for the hills. That was awful. She was awful. But anyway, at the time, overlooking that (which I deliberately did), since the lead personality who'd previously been on that project but was laid off was now gone, and I wasn't getting much in the way of introduction, I deliberately, if bombastically, made a ton of assertions and at the same time asked a lot of really stupid, ignorant questions, which ticked off everyone on the team. So they canned me from the team. So that was fun. But what really did me in was I overheard the Neudesic chief developer director guy tell my boss that I "shouldn't be writing software". They assigned me some out-of-state ops role. I should have quit then and there; sure, I screwed up, but I'm a developer, not an ops guy. I didn't last long, and I ultimately did resign.
  4. So now in 2014 I got a very odd and delightfully educational gig with DeMark Analytics. So first of all, Tom DeMark, the owner, is a wealthy financial technician (a term I never knew before) who did what I imagined doing some years back--he studied financial charts and came up with artificially intelligent algorithms that predicted changes in the market. He's one among many people who come up with this stuff, but even so, that's what he did. So now they were building a product for people who could use these financial chart studies--a charting app, basically, with studies being overlaid over the chart. This entailed financial event data being siphoned into both the back-end engine as well as the client (the web browser). This is high tech stuff, and the pattern in use was CQRS-ES, which I'd never heard of, much less mastered. So here's where it all broke down: 1) We all sat in close proximity to each other. No privacy. All conversations fully exposed. 2) I was hired as a lead, but I was wrong a couple of times when I was adamant, it was observed by the teammates, and so my authoritative experience as a lead developer was never trusted again. 3) Everyone in the company but maybe four of us was either family to Tom DeMark or close friends of Tom DeMark. It was a nepotistic environment. I worked under one of Tom's sons (he was CEO/President), and another of Tom's sons worked under me--well, alongside me, since me being lead didn't work out. He was a bit spoiled. Quick on the Javascript, though; I couldn't keep up with his code, which was charting graphics. Like, at all. So I stopped trying. 4) My pay was adequate (best I can say about it) but my paycheck was cut monthly. Weird, bizarre, and painful. But it was worth it, I was investing in this, I figured. 5) My direct boss was a super high IQ ass. I asked him technical questions about the middle/back end, like about how Cassandra was to be used, and instead of answering, he'd call everyone over--the CSS front-end developers, everyone. Then ask me to ask my question again. All to, I guess, humble me? .. Show me that this was "duh" knowledge that everyone would know and understand? So where this went critical was 6) I again used rhetoric. Crap, man. This pretty much got me fired. I used rhetoric! What I mean is, I insinuated, in so many words, that "your explanation sounds vague, and if I didn't know any better I'd say it almost sounds like you said [something obviously stupid]". Except I didn't say it like that, I actually, literally said, ".. you mean [something obviously stupid]?" I really, truly, genuinely trusted that he understood that I was being rhetorical, but this was the second time I made that mistake--I made the same mistake at Neudesic, "I can't read that [Javascript code]", I could, but I was getting at the fact that I shouldn't have to stop and read it--and with these guys it was sabotage. A few minutes after we returned to our desks he threw his hands up and announced he was switching to Java/Scala. "Sorry Jon." Sorry, Jon? So yeah, he was basically saying "you're fired, please quit." It took a couple more months, but I eventually did.
  5. So then I work at another consultancy firm. Solution Stream. Utah-based. They seemed to be trying to spread out into Phoenix. "I can do this," I thought. I learned at Best Software (Sage Software) when I moved to Arizona how consultants--consultants, not contractors--work, and bring prestige to the process of coming up with technical solutions and strategies, documenting them, and working them with the clients. But Solution Stream was really primarily just interested in creating contractors, apparently, but regardless, they didn't appreciate what I brought to the table, they undersold my capabilities, the executives decided they didn't like me, and they literally pulled me off a project that the client and my direct boss said I was doing great with and signed me over to Banner Health as a temp-to-hire (fire). I had no interest in being hired as a permanent Banner Health employee. When my temp-to-hire contract ended (the "temp-" part), everything ended. I swore off all consulting firms at that point. No more consulting firms. Never again.
  6. So then I got picked up at InEight. Bought out by major construction company Kiewit, InEight was building a cloud version of their Hard Dollar desktop app which manages large scale construction logistics (vendors, materials, supplies, etc). They had some workable plans and ideas. But things broke down real fast. Everyone who interviewed me quit within the year I joined, and it wasn't hard to see why. 1) They had non-technical people at the helm (senior leadership) making some very expensive and frustrating platform and architecture decisions. High performance software with minimal performance hosting, nothing worked, because they didn't want to spend the money for scaling the web tier. 2) The work was outsourced. Most of it was outsourced to India. Eventually they shipped some of it onshore to another midwestern state. But even onshore, most of their staff were H1-B visa holders. Foreigners. Nearly everyone I was working with was from India. Even after so many people quit, I stuck around as long as I could. But eventually I couldn't stand things anymore, my career path had become stagnant, and I knew I couldn't work with the senior executives (no one could, except people from India I guess). I was about to give two weeks notice, when those senior execs pulled me into a room and chewed me out for being "disrespectful". I was done. Never saw them or anyone over there again. (Actually, that's not entirely true; I have maintained strong friendship with at least one colleague from there. He got laid off a few months after I left. We're friends; we literally just met up this week.)
  7. For the last year and a half I've been working ... somewhere. For now. I came in as a lead developer, but they, too, openly declared me "disrespectful", so I've given up and just been a highly productive, proficient, heads-down programmer. This place, too, is mostly H1-B visa holders from India. I'm surrounded by foreigners. Scarcely a black, white, or Mexican face. It's depressing. I have less and less each year against Indians but my God, let me work with people of my own culture if I'm here in USA, just a few like-minded, like-raised friends, just a few? And now my team is getting dismantled, due to a third party taking over what we're doing. So I'm about to get laid off. If I'm not laid off, though, well, ... 1) as a contractor, I don't get paid holidays, I don't get paid vacations, and that was painful enough, but unexpectedly after my hire it turns out there is a mandatory two weeks unpaid leave during Christmas & New Year's, and that's unacceptable ($thousands of $dollars lost, not to mention depressing since I spend holidays alone, so yes, it's unacceptable). 2) I have been super comfortable, and super complacent, with little to gain in technical growth. It's been ASP.NET MVC with SQL Server and jQuery. And some .NET Core 2.2 and Razor Pages. Woo wee. *sigh* So yeah, I'm open to change, regardless of whether I get laid off.
There's my life story for the last seven years. Stupid, depressing, awful, I've been awful, I've let myself screw myself over time and time again. So here is my new strategy.

 
I have no one to blame, even where I've whined and complained, I have no one to blame for my life's frustrations but myself. It's part of the maturing process. I've embraced my learnings and I will carry on. I will try to let go of the past, but I only repeat and document them here because I have learned from them, and perhaps you can, too. What do I want in my career path? I miss the days when I was an innovator.

You guys remember AJAX? Yeah? I dreamed up AJAX in 1998 when IE4 came out. I called it "TelnetGUI". Stupid name. Other people came up with the same idea a few years later and earned the credit.

You guys remember Windows Live Writer? Yeah? ... Total rip-off of my PowerBlog app, down to the details. You could say I prototyped Windows Live Writer before Microsoft started working on Windows Live Writer. Microsoft even interviewed me after I built PowerBlog, because of PowerBlog and its Microsoft-minded inspirations of component integration. Jerk interviewer was like, "Wait, you mean you don't know C++?! Oh good grief, I thought you were a real programmer." Screw you, Microsoft interviewer. LOL. Anyway, Windows Live Writer came out a couple years later. Took all of PowerBlog's fundamental ideas, even down to gleaning the CSS theme and injecting it into the editor.
 
You guys remember PowerShell? Yeah? I prototyped the idea in or around 2004. I took the ActiveScript COM object, put it in a C++ console container, spoonfed some commands where you could new-up some objects and work with them in a command-line shell, suggested that the sky's the limit if you integrate full-blown .NET CLR and shell commands in this, and showed it to the world on Microsoft's newsgroups. Microsoft was watching; I planted a seed. A year or so later, PowerShell ("Monad") was previewed to the world. I didn't do the dirty work of development of it, but I seeded an idea.
 
You guys remember jQuery UI? Yeah? I cobbled together a windowing plugin for jQuery a year or two before jQuery UI was released. It was called jqDialogForms. Pretty nifty, I thought, but heck, I never got to use it in production.  
 
In fact there's a lot of crap in my attic I recently dug out and up over at https://github.com/stimpy77/ancient-legacy (It really is crap. Nothing much to see.)
 
And, oh yeah, you guys remember Entity Framework, Magical Unicorn Edition? I, too, had been inspired by Fluent NHibernate, and I, too, was working on an ORM library I called Gemli [src]. Sadly, I ended up with a recursion nightmare I myself stopped being able to read, development slowed to a halt, and then suddenly Microsoft announced EF Magical Unicorn Edition, and I observed that it did everything I was trying to do in Gemli plus 99x more. So that was a waste of time. Even so, that was mini-ORM-of-my-own-making #2 or #3. 
 
All of these micro-innovations and others are years old, created during times of passion and egotistical self-perception of brilliance. What happened?! I think we can all see what happened. My ego kept bulldozing my career. My social ineptitude vanquished my opportunities. And I got really, really lazy on the tech side.
 
My blog grew stagnant because, frankly, career errors aside, my bold and lengthy philosophical assertions in my blog articles were pretty wrong. Philosophies like, "design top down, implement bottom up". Says who? Why? I dunno. Seemed like an interesting case to make at the time. But then people at meetups said they knew my name, read my blog, quoted my articles, and I curled up and squealed and said "oh gawd I had no idea what I was writing". (Actually I just nodded my head with a smile and blushed.) 
 
For the last few weeks I have spent, including study time, more than 70 hours a week, working. Working on hard skills growth. Working on side project development--brainstorming, planning. Working on fixing patchy things, like getting this blog up, so I can get into writing again. It's overdue for a replacement, but frankly I might just switch over to http://dev.to/ like all the young cool kids. 
 
Tech Things I Am Paying Attention To 
 
.NET Core 3 is where it's at, and .NET 5 is going to be The Great .NET Redux's great arrival. However, the JVM has had a huge comeback over the last half-decade, and NodeJS and npm, like squirrelly cats, have been sticking their noses in everything. Big client-side Javascript libraries from a year or two ago (Facebook's React, Google's Angular, China's Vue) are now server-side for some dumb reason. Most importantly, software is becoming event-driven. IaaS is gone. PaaS is passé. Kubernetes is now standard, apparently. Microsoft's MSMQ is so 1990s, RabbitMQ so 00's, LinkedIn's Kafka is apparently where it's at, and now Yahoo!'s Pulsar is gaining notoriety for being even more performant. 
 
My day job being standard transactional web dev with ASP.NET/jQuery/SQL has made me bewilderingly antsy. If I want to continue to be competitive in complex software architecture and software development I've got to really go knee deep--no, neck deep--in React/Angular/Vue on the front-end, MongoDB, Hadoop, etc. on data, Docker/Kubernetes on the platform, Kafka on the data transfer, CQRS+ES on the transaction cycles, DDD as the foundation to argue for it all, and books to explain it all. I need to go to college, and if I don't have time or money for that I need to be studying and reading and challenging myself at all hours I am free until I am confident as a resource for any of these roles.
 
Enough of the crap reputation of being a wannabe. Let's be. 

 


C# | Career | General Technology | Health and Wellness | Open Source | Opinion | Pet Projects | Social Networking | Software Development | Unresolvable

Technology Status Update 2016

by Jon Davis 10. July 2016 09:09

Hello, peephole. (people.) Just a little update. I've been keeping this blog online for some time, my most recent blog entries always so negative, I keep having to see that negativity every time I check to make sure the blog's up, lol. I'm tired of it so I thought I'd post something positive.

My current job is one I hope to keep for years and years to come, and if that doesn't work out I'll be looking for one just like it and try to keep it for years and years to come. I'm so done with contracting and consulting (except the occasional mentoring session on code mentor -dot- io). I'm still developing, of course, and as technology is changing, here's what's up as I see it. 

  1. Azure is relevant. 
    The world really has shifted to the cloud, and the majority of companies, finally, are offloading their hosting to it. AWS, Azure, take your pick; everyone who hates Microsoft will obviously choose AWS, but Azure is the obvious choice for Microsoft stack folks, and there is nothing meaningful AWS has that Azure doesn't at this point. The amount of stuff on Azure is sufficiently terrifying in quantity and supposed quality to give me a thrill. So I'm done with hating on Azure; after all their marketing and nagging and pushing, Microsoft has crossed a threshold of market saturation such that I am adequately impressed. I guess that means I have to be an Azure fan, too, now. Fine. Yay Azure, woo. -.-
  2. ASP.NET is officially rebooted. 
    So I hear this thing called ASP.NET Core 1.0, formerly known as ASP.NET 5, formerly known as ASP.NET vNext, has RTM'd, and I hear it's like super duper important. It snuck by me, I haven't mastered it, but I know it enough to know a few things:
    • It's a total redux by means of redo. It's like the Star Trek reboot except it’s smaller and there are fewer planets it can manage, but it’s exactly like the Star Trek reboot in that it will probably implode yours.
    • If you've built your career on ASP.NET and you want to continue living on ASP.NET's laurels, now is not the time to master ASP.NET Core 1.0. Give it another year or two to mature. 
    • If you're stuck on or otherwise fascinated by non-Microsoft operating systems, namely Mac and Linux, but you want to use the Microsoft programming stack, you absolutely must learn and master ASP.NET Core 1.0 and EF7.
    • If all you liked from ASP.NET Core 1.0 was the dynamic configs and build-time transpiles, you don't need ASP.NET Core for that LOL LOL ROFLMAO LOL LOL LOL *cough*
  3. The Aurelia Javascript framework is nearly ready.
    Overall, Javascript framework trends have stopped. Companies are building upon AngularJS 1.x. Everyone who’s behind is talking about React as if it was new and suddenly newly relevant (it isn’t new anymore). Everyone still implementing Knockout is out of the loop and will die off soon enough. jQuery is still ubiquitous and yet ignored as a thing, but meanwhile it just turned v3.

    I don’t know what to think about things anymore. Angular 2.0 requires TypeScript, people hate TypeScript because they hate transpilers. People are still comparing TypeScript with CoffeeScript. People are dumb. If it wasn’t for people I might like Angular 2.0, and for that matter I’d be all over AureliaJS, which is much nicer but just doesn’t have Google as the titanic marketing arm. In the end, let’s just get stuff done, guys. Build stuff. Don’t worry about frameworks. Learn them all as you need them.
  4. Node.js is fading and yet slowly growing in relevance.
    Do you remember .. oh heck unless you're graying probably not, anyway .. do you remember back in the day when the dynamic Internet was first loosed on the public and C/C++ and Perl were used to execute from cgi-bin, and if you wanted to add dynamic stuff to a web site you had to learn Perl and maybe find Perl pearls and plop them into your own cgi-bin? Yeah, no, I never really learned Perl, either, but I did notice the trend, but in the end, what did C/C++ and Perl mean to us up until the last decade? Answer: ubiquitous availability, but not web server functionality, just an ever-present availability for scripts, utilities, hacks, and whatever. That is where node.js is headed. Node.js for anything web related has become and will continue to be a gigantic mess of disorganized, everyone-is-equal, noisily integrated modules that sort of work but will never be as stable in built compositions as more carefully organized platforms. Frankly, I see node.js being more relevant as a workstation runtime than a server runtime. Right now I'm looking at maybe poking at it in a TFS build environment, but not so much for hosting things.
    I will always have a bitter taste in my mouth with node.js after trying to get socket.io integrated with Express and watching the whole thing just crumble, with no documentation or community help to resolve it, and this happened not just once on the job (never resolved before I walked away) but also during a code-mentor mentoring session (which we didn't figure out), even after a good year or so of maturity of the platform after the first instance. I still like node.js but will no longer be trying to build a career on it.
  5. Pay close attention and learn up on Swagger aka OpenAPI. 
    Remember when -- oh wait, no, unless you're graying, .. never mind .. anyway, -- once upon a time something called SOAP came out, and with it came a self-documentation feature that was a combination of WSDL and some really handy HTML-generated scaffolding built into web services that would let you manually test SOAP-based services by filling out a self-generated form. Well now that JSON-based REST is the entirety of the playing field, we need the same self-documentation. That's where Swagger came in a couple years ago and everyone uses it now. Swagger needs some serious overhauling--someone needs to come up with a Swagger-compliant UI built on more modular and configurable components, for example--but as a drop-in self-documentation feature for REST services it fits the bill.
    • Swagger can be had on .NET using a lib called Swashbuckle. If you use OData, there is a lib called Swashbuckle.OData. We use it very, very heavily where I work. (I was the one who found it and brought it in.) "Make sure it shows up and works in Swagger" is a requirement for all REST/OData APIs we build now.
    • Swagger is now OpenAPI but it's still Swagger, there are not yet any OpenAPI artifacts that I know of other than Swagger. Which is lame. Swagger is ugly. Featureful, but ugly, and non-modular.
    • Microsoft is listed as a contributing member of the OpenAPI committee, but I don't know what that means, and I don't see any generic output from OpenAPI yet. I'm worried that Microsoft will build a black box (rather than white box) Swagger-compliant alternative for ASP.NET Core.
    • Other curious ones to pay attention to, but which I don't see as significantly supported by the .NET community yet (maybe I haven't looked hard enough), are:
  6. OData v4 has potential but is implementation-heavy and sorely needs a v5. 
    A lot of investments have been made in OData v4 as a web-based facade to Entity Framework data resources. It's the foundation of everything the team I'm with is working on, and I've learned to hate it. LOL. But I see its potential. I hope investments continue because it is sorely missing fundamental features like
    • MS OData needs better navigation property filtering and security checking, whether by optionally redirecting navigation properties to EDM-mapped controller routes (yes, taking a performance hit) or some other means
    • MS OData '/$count' breaks when [ODataRoute] is declared, boo.
    • OData spec sorely needs "DISTINCT" feature
    • $select needs to be smarter about returning anonymous models and not just eliminating fields; if all you want is one field in a nested navigation property in a nested navigation property (the equivalent of LINQ's .Select(x=>new {ID=x.ID, DesiredField2=x.Child.Child2.DesiredField2})), in the OData result set you will have to dive into an array and then into an array to find the one desired field
    • MS OData output serialization is very slow and CPU-heavy
    • Custom actions and functions, and making them exposed to Swagger via Swashbuckle.OData, make me want to pull my hair out. It sometimes takes two hours of screaming and choking people to set up a route in OData where it would take me two minutes in Web API, and in the end I end up with a weird namespaced function name in the route like /OData/Widgets/Acme.GetCompositeThingmajig(4). There's no getting away from even the default namespace, and your EDM definition must be an EXACT match to what is clearly obviously spelled out in the C# controller implementation or you die. I mean, if Swashbuckle / Swashbuckle.OData can mostly figure most of it out without making us dress up in a weird Halloween costume, surely Microsoft's EDM generator should have been able to.
  7. "Simple CRUD apps" vs "messaging-oriented DDD apps"
    has become the new Microsoft vs Linux or C# vs Java or SQL vs NoSQL. 

    The war is really ugly. Over the last two or three years people have really been talking about how microservices and reaction-oriented software have turned the software industry upside down. Those who hop on the bandwagon are neglecting to know when to choose simpler tooling chains for simple jobs, meanwhile those who refuse to jump on the bandwagon are using some really harsh, cruel words to describe the trend ("idiots", "morons", etc). We need to learn to love and embrace all of these forms of software, allow them to grow us up, and know when to choose which pattern for which job.
    • Simple CRUD apps can still accomplish most business needs, making them preferable most of the time
      • .. but they don't scale well
      • .. and they require relatively very little development knowledge to build and grow
    • Non-transactional message-oriented solutions and related patterns like CQRS-ES scale out well but scale developers' and testers' comprehension very poorly; they have an exponential scale of complexity footprint, but for the thrill seekers they can be, frankly, hella fun and interesting so long as they are not built upon ancient ESB systems like SAP and so long as people can integrate in software planning war rooms.
    • Disparate data sourcing as with DDD with partial data replication is a DBA's nightmare. DBAs will always hate it, their opinions will always be biased, and they will always be right in their minds that it is wrong and foolish to go that route. They will sometimes be completely correct.

  8. Integrated functional unit tests are more valuable than TDD-style purist unit tests. That’s my new conclusion about developer testing in 2016. Purist TDD mindset still has a role in the software developer’s life. But there is still value in automated integration tests, and when things like Entity Framework are heavily in play, apparently it’s better to build upon LocalDB automation than Moq.
    At least, that’s what my current employer has forced me to believe. Sadly, the purist TDD mindset that I tried to adopt and bring to the table was not even slightly appreciated. I don’t know if I’m going to burn in hell for being persuaded out of a purist unit testing mindset or not. We shall see, we shall see.
  9. I'm hearing some weird and creepy rumors I don't think I like about SQL Server moving to Linux and eventually getting itself renamed. I don't like it, I think it's unnecessary. Microsoft should just create another product. Let SQL Server be SQL Server for Windows forever. Careers are built on such things. Bad Microsoft! Windows 8, .NET Framework version name fiascos, Lync vs Skype for Business, when will you ever learn to stop breaking marketing details to fix what is already successful??!
  10. Speaking of SQL Server, SQL Server 2016 is RTM'd, and full blown SSMS 2016 is free.
  11. On-premises TFS 2015 only just recently acquired gated check-in build support, in a recent update. Seriously, like, what the heck, Microsoft? It's also super buggy; you get a nasty error message in Visual Studio while monitoring its progress. This is laughable.
    • Clear message from Microsoft: "If you want a premium TFS experience, Azure / Visual Studio Online is where you have to go." Microsoft is no longer a shrink-wrapped product company; they sell shrink-wrapped software only for the legacy folks as an afterthought. They are a hosted platform company now, all the way.
      • This means that Windows 10 machines including Nokia devices are moving to be subscription boxes with dumb client deployments. Boo.
  12. Another rumor I've heard is that
    Microsoft is going to abandon the game industry.

    The Xbox platform was awesome because Microsoft was all in. But they're not all in anymore, and it shows, and so now as they look at their lackluster profits, what did they expect?
    • Microsoft: Either stay all-in with Xbox and also Windows 10 (dudes, have you seen Steam's Big Picture mode? no excuse!) or say goodbye to the consumer market forever. Seriously. Because we who thrive on the Microsoft platform are also gamers. I would recommend knocking yourselves over to partner with Valve to co-own the whole entertainment world like the oligarchies that both of you are since Valve did so well at keeping the Windows PC relevant to the gaming markets.

For the most part I've probably lived under a rock, I'm sure, I've been too busy enjoying my new 2016 Subaru WRX (a 4-door racecar) which I am probably going to sell in the next year because I didn't get out of debt first, but not before getting a Kawasaki Vulcan S ABS Café as my first motorized two-wheeler, riding that between playing Steam games, going camping, and exploring other ways to appreciate being alive on this planet. Maybe someday I'll learn to help the homeless and unfed, as I should. BTW, in the end I happen to know that "love God and love people" are the only two things that matter in life. The rest is fluff. But I'm so selfish, man do I enjoy fluff.  I feel like such a jerk. Those who know me know that I am one. God help me.

[Photos: a 2016 Kawasaki Vulcan S ABS Café, among others.]
Top row: Fluff that doesn’t matter and distracts me from matters of substance.
Bottom row: Matters of substance.


ASP.NET | Blog | Career | Computers and Internet | Cool Tools | LINQ | Microsoft Windows | Open Source | Software Development | Web Development | Windows

Announcing Fast Koala, an alternative to Slow Cheetah

by Jon Davis 17. July 2015 19:47

So this is a quick FYI for teh blogrollz that I have recently been working on a little Visual Studio extension that will do for web apps what Slow Cheetah refused to do. It enables build-time transformations for web apps as well as for Windows apps and class libraries. 

Here's the extension: https://visualstudiogallery.msdn.microsoft.com/7bc82ddf-e51b-4bb4-942f-d76526a922a0  

Here's the Github: https://github.com/stimpy77/FastKoala

Either link will explain more.

Introducing XIO (xio.js)

by Jon Davis 3. September 2013 02:36

I spent the latter portion of last week and the bulk of the holiday fleshing out the initial prototype of XIO ("ecks-eye-oh" or "zee-oh", I don't care at this point). It was intended to start out as an I/O library targeting everything (get it? X I/O, as in I/O for x), but that in turn forced me to make it a repository library with RESTful semantics. I still want to add stream-oriented functionality (WebSocket / long polling) to it to make it truly an I/O library. In the meantime, I hope people can find it useful as a consolidated interface library for storing and retrieving data.

You can access this project here: https://github.com/stimpy77/xio.js#readme

Here's a snapshot of the README file as it was at the time of this blog entry.



XIO (xio.js)

version 0.1.1 initial prototype (all 36-or-so tests pass)

A consistent data repository strategy for local and remote resources.

What it does

xio.js is a Javascript resource that supports reading and writing data to/from local data stores and remote servers using a consistent interface convention. One can write code that can be more easily migrated between storage locations and/or URIs, and repository operations are simplified into a simple set of verbs.

To write and read to and from local storage,

xio.set.local("mykey", "myvalue");
var value = xio.get.local("mykey")();

To write and read to and from a session cookie,

xio.set.cookie("mykey", "myvalue");
var value = xio.get.cookie("mykey")();

To write and read to and from a web service (as optionally synchronous; see below),

xio.post.mywebservice("mykey", "myvalue");
var value = xio.get.mywebservice("mykey")();

See the pattern? It supports localStorage, sessionStorage, cookies, and RESTful AJAX calls, using the same interface and conventions.

It also supports generating XHR functions and providing implementations that look like:

mywebservice.post("mykey", "myvalue");
var value = mywebservice.get("mykey")(); // assumes synchronous; see below
Optionally synchronous (asynchronous by default)

Whether you're working with localStorage or an XHR resource, each operation returns a promise.

When the action is synchronous, such as in working with localStorage, it returns a "synchronous promise", which is essentially a function that can optionally be invoked immediately; doing so wraps .success(value) and returns the value. This also works with XHR when async: false is passed in with the options during setup (define(..)).

The examples below are the same, only because XIO knows that the localStorage implementation of get is synchronous.

Asynchronous convention: var val; xio.get.local('mykey').success(function(v) { val = v; });

Synchronous convention: var val = xio.get.local('mykey')();

Generated operation interfaces

Whenever a new repository is defined using XIO, a set of supported verbs and their implemented functions is returned and can be used as a repository object. For example:

var myRepository = xio.define('myRepository', { 
    url: '/myRepository?key={0}',
    methods: ["GET", "POST", "PUT", "DELETE"]
});

.. would populate the variable myRepository with:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

.. and each of these would return a promise.

XIO's alternative convention

But the built-in convention is a bit unique, using xio[action][repository](key, value) (i.e. xio.post.myRepository("mykey", {first: "Bob", last: "Bison"})), which, again, returns a promise.

This syntactical convention, with the verb preceding the repository, is different from the usual convention of object.method(key, value).

Why?!

The primary reason was to be able to isolate the repository from the operation, so that one could theoretically swap out one repository for another with minimal or no changes to CRUD code. For example,

var repository = "local"; // use localStorage for now; 
                          // replace with "my_restful_service" when ready 
                          // to integrate with the server
xio.post[repository](key, value).complete(function() {

    xio.get[repository](key).success(function(val) {
        console.log(val);
    });

});

Note here how "repository" is something that can move around. The goal, therefore, is to make disparate repositories such as localStorage and RESTful web service targets support the same features using the same interface.

As a bit of an experiment, this convention of xio[verb][repository] also seems to read and write a little better, even if it's a bit weird at first to see. The thinking is similar to the verb-target convention in PowerShell. Rather than taking a repository and working with it independently with assertions that it will have some CRUD operations available, the perspective is flipped and you are focusing on what you need to do, the verbs, first, while the target becomes more like a parameter or a known implementation of that operation. The goal is to dumb down CRUD operation concepts and repositories and refocus on the operations themselves so that, rather than repositories having an unknown set of operations with unknown interface styles and other features, instead, your standard CRUD operations, which are predictable, have a set of valid repository targets that support those operations.

This approach would have been entirely unnecessary and pointless if Javascript inherently supported interfaces, because then we could just define a CRUD interface and write all our repositories against those CRUD operations. But it doesn't, and indeed with the convention of closures and modules, it really can't.

Meanwhile, when you define a repository with xio.define(), as was described above and detailed again below, it returns an object that contains the operations (get(), post(), etc) that it supports. So if you really want to use the conventional repository[method](key, value) approach, you still can!

Download

Download here: https://raw.github.com/stimpy77/xio.js/master/src/xio.js

To use the whole package (by cloning this repository)

.. and to run the Jasmine tests, you will need Visual Studio 2012 and a registration of the .json file type with IIS / IIS Express MIME types. Open the xio.js.csproj file.

Dependencies

jQuery is required for now, for XHR-based operations, so it's not quite ready for node.js. This dependency requirement might be dropped in the future.

Basic verbs

See xio.verbs:

  • get(key)
  • set(key, value); used only by localStorage, sessionStorage, and cookie
  • put(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • post(key, data); defaults to "set" behavior when using localStorage, sessionStorage, or cookie
  • delete(key)
  • patch(key, patchdata); implemented based on JSON/Javascript literals field sets (send only deltas)
Examples
// initialize

var xio = Xio(); // initialize a module instance named "xio"
localStorage
xio.set.local("my_key", "my_value");
var val = xio.get.local("my_key")();
xio.delete.local("my_key");

// or, get using asynchronous conventions, ..    
var val;
xio.get.local("my_key").success(function(v) 
    val = v;
});

xio.set.local("my_key", {
    first: "Bob",
    last: "Jones"
}).complete(function() {
    xio.patch.local("my_key", {
        last: "Jonas" // keep first name
    });
});
sessionStorage
xio.set.session("my_key", "my_value");
var val = xio.get.session("my_key")();
xio.delete.session("my_key");
cookie
xio.set.cookie(...)

.. supports these arguments: (key, value, expires, path, domain)

Alternatively, retaining only the xio.set["cookie"](key, value) signature, you can use the automatically returned helper replacer functions:

xio.set["cookie"](skey, svalue)
    .expires(Date.now() + 30 * 24 * 60 * 60000)
    .path("/")
    .domain("mysite.com");

Note that using this approach, while more expressive and potentially more convertible to other CRUD targets, also results in each helper function deleting the previous value to set the value with the new adjustment.

session cookie
xio.set.cookie("my_key", "my_value");
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
persistent cookie
xio.set.cookie("my_key", "my_value", new Date(Date.now() + 30 * 24 * 60 * 60000));
var val = xio.get.cookie("my_key")();
xio.delete.cookie("my_key");
web server resource (basics)
var define_result =
    xio.define("basic_sample", {
                url: "my/url/{0}/{1}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put, xio.verbs.delete ],
                dataType: 'json',
                async: false
            });
var promise = xio.get.basic_sample([4,12]).success(function(result) {
   // ..
});
// alternatively ..
var promise_ = define_result.get([4,12]).success(function(result) {
   // ..
});

The define() function creates a verb handler or route.

The url property is an expression that is formatted with the key parameter of any XHR-based CRUD operation. The key parameter can be a string (or number) or an array of strings (or numbers, which are convertible to strings). This value will be applied to the url property using the same convention as the typical string formatters in other languages such as C#'s string.Format().

Where the methods property is defined as an array of "GET", "POST", etc., each one mapping to a standard XIO verb, an XHR route will be internally created on behalf of the rest of the options defined in the options object that is passed in as a parameter to define(). The return value of define() is an object that lists all of the various operations that were wrapped for XIO (i.e. get(), post(), etc).

The rest of the options are used, for now, as jQuery's $.ajax(..., options) parameter. The async property defaults to true. When async is false, the returned promise is wrapped with a "synchronous promise", which you can optionally invoke immediately with parens (()) to return the value that is normally passed into .success(function (value) { .. }).

In the above example, define_result is an object that looks like this:

{
    get: function(key) { /* .. */ },
    post: function(key, value) { /* .. */ },
    put: function(key, value) { /* .. */ },
    delete: function(key) { /* .. */ }
}

In fact,

define_result.get === xio.get.basic_sample

.. should evaluate to true.

Sample 2:

var ops = xio.define("basic_sample2", {
                get: function(key) { return "value"; },
                post: function(key,value) { return "ok"; }
            });
var promise = xio.get["basic_sample2"]("mykey").success(function(result) {
   // ..
});

In this example, the get() and post() operations are explicitly declared into the defined verb handler and wrapped with a promise, rather than internally wrapped into XHR/AJAX calls. If an explicit definition returns a promise (i.e. an object with .success and .complete), the returned promise will not be wrapped. You can mix-and-match both generated XHR calls (with the url and methods properties) as well as custom implementations (with explicit get/post/etc properties) in the options argument. Custom implementations will override any generated implementations if they conflict.

web server resource (asynchronous GET)
xio.define("specresource", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json'
            });
var val;
xio.get.specresource("myResourceAction").success(function(v) { // gets http://host_server/spec/res/myResourceAction
    val = v;
}).complete(function() {
    // continue processing with populated val
});
web server resource (synchronous GET)
xio.define("synchronous_specresources", {
                url: "spec/res/{0}",
                methods: [xio.verbs.get],
                dataType: 'json',
                async: false // <<==!!!!!
            });
var val = xio.get.synchronous_specresources("myResourceAction")(); // gets http://host_server/spec/res/myResourceAction
web server resource POST
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // validate:
    xio.get.contactsvc(id).success(function(contact) {  // gets from http://host_server/svcapi/contact/{id}
        expect(contact.first).toBe("Fred");
    });
});
web server resource (DELETE)
xio.delete.myresourceContainer("myresource");
web server resource (PUT)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.put ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    myModel = {
        first: "Carl",
        last: "Zeuss"
    }
    xio.put.contactsvc(id, myModel).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
web server resource (PATCH)
xio.define("contactsvc", {
                url: "svcapi/contact/{0}",
                methods: [ xio.verbs.get, xio.verbs.post, xio.verbs.patch ],
                dataType: 'json'
            });
var myModel = {
    first: "Fred",
    last: "Flinstone"
}
var val = xio.post.contactsvc(null, myModel).success(function(id) { // posts to http://host_server/svcapi/contact/
    // model has been posted, new ID returned
    // now modify:
    var myModification = {
        first: "Phil" // leave the last name intact
    }
    xio.patch.contactsvc(id, myModification).success(function() {  /* .. */ }).error(function() { /* .. */ });
});
custom implementation and redefinition
xio.define("custom1", {
    get: function(key) { return "teh value for " + key};
});
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh value for tehkey";
xio.redefine("custom1", xio.verbs.get, function(key) { return "teh better value for " + key; });
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "teh better value for tehkey"
var custom1 = 
    xio.redefine("custom1", {
        url: "customurl/{0}",
        methods: [xio.verbs.post],
        get: function(key) { return "custom getter still"; }
    });
xio.post.custom1("tehkey", "val"); // asynchronously posts to URL http://host_server/customurl/tehkey
xio.get.custom1("tehkey").success(function(v) { alert(v); } ); // alerts "custom getter still"

// oh by the way,
for (var p in custom1) {
    if (custom1.hasOwnProperty(p) && typeof(custom1[p]) == "function") {
        console.log("custom1." + p); // should emit custom1.get and custom1.post
    }
}

Future intentions

WebSockets and WebRTC support

The original motivation to produce an I/O library was actually to implement a WebSockets client that can fall back to long polling, and that has no dependency upon jQuery. Instead, what has so far become implemented has been a standard AJAX interface that depends upon jQuery. Go figure.

If and when WebSocket support gets added, the next step will be WebRTC.

Meanwhile, jQuery needs to be replaced with something that works fine on Node.js.

Additionally, on a completely isolated parallel path: if the ASP.NET SignalR team makes no progress toward freeing the SignalR client from jQuery, xio.js might be tailored into a somewhat code-compatible client implementation, or into a support library for a separate SignalR client implementation.

Service Bus, Queuing, and background tasks support

At an extremely lightweight scale, I do want to implement some service bus and queue features. For remote service integration, this would just be more verbs to sit on top of the existing CRUD operations, as well as WebSockets / long polling / SignalR integration. This is all fairly vague right now because I am not sure yet what it will look like. On a local level, however, I am considering integrating with Web Workers. It might be nice to use XIO to manage deferred I/O via the Web Workers feature. There are major limitations to Web Workers, however, such as no access to the DOM, so I am not sure yet.

Other notes

If you run the Jasmine tests, make sure the .json file type is set up as a mime type. For example, IIS and IIS Express will return a 403 otherwise. Google reveals this: http://michaellhayden.blogspot.com/2012/07/add-json-mime-type-to-iis-express.html

License

The license for XIO is pending, as it's not as important to me as getting some initial feedback. It will definitely be an attribution-based license. If you use xio.js as-is, unchanged, with the comments at the top, you may definitely use it for any project. I will drop in a license (probably Apache 2, BSD, or Creative Commons Attribution, or some such) in the near future.

Canvas & HTML 5 Sample Junk

by Jon Davis 27. September 2012 15:48

Poking around with HTML 5 canvas again, refreshing my knowledge of the basics. Here's where I'm dumping links to my own tinkerings for my own reference. I'll update this with more list items later as I come up with them.

  1. Don't have a seizure. http://jsfiddle.net/8RYtu/22/
    HTML5 canvas arc, line, audio, custom web font rendered in canvas, non-fixed (dynamic) render loop with fps meter, window-scale, being obnoxious
  2. Pass-through pointer events http://jsfiddle.net/MtGT8/1/
    Demonstrates how the canvas element, which would normally intercept mouse events, does not do so here, and instead allows the mouse event to propagate to the elements behind it. Huge potential but does not work in Internet Explorer.
  3. Geolocation sample. http://jsfiddle.net/nmu3x/4/ 
    Nothing to do with canvas here. Get over it.
  4. ECMAScript 5 JavaScript property getter/setter. http://jsfiddle.net/9QpnW/8/
    Like C#, JavaScript now supports assigning functions to property getters/setters. See how I store a value privately (in a closure) and misbehave by returning a modified value.

Esent: The Decade-Old Database Engine That Windows (Almost) Always Had

by Jon Davis 30. August 2010 03:08

Windows has a technology I just stumbled upon that should make a few *nix folks jealous. It’s called Esent. It’s been around since Windows 2000 and is still alive and strong in Windows 7.

What is Esent, you ask? It’s a database engine. No, not like SQL Server, it doesn’t do the T-SQL language. But it is an ISAM database, and it’s got several features of a top-notch database engine. According to this page,

ESENT is an embeddable, transactional database engine. It first shipped with Microsoft Windows 2000 and has been available for developers to use since then. You can use ESENT for applications that need reliable, high-performance, low-overhead storage of structured or semi-structured data. The ESENT engine can help with data needs ranging from something as simple as a hash table that is too large to store in memory to something more complex such as an application with tables, columns, and indexes.

Many teams at Microsoft—including The Active Directory, Windows Desktop Search, Windows Mail, Live Mesh, and Windows Update—currently rely on ESENT for data storage. And Microsoft Exchange stores all of its mailbox data (a large server typically has dozens of terabytes of data) using a slightly modified version of the ESENT code.

Features
Significant technical features of ESENT include:

  • ACID transactions with savepoints, lazy commits, and robust crash recovery.
  • Snapshot isolation.
  • Record-level locking (multi-versioning provides non-blocking reads).
  • Highly concurrent database access.
  • Flexible meta-data (tens of thousands of columns, tables, and indexes are possible).
  • Indexing support for integer, floating point, ASCII, Unicode, and binary columns.
  • Sophisticated index types, including conditional, tuple, and multi-valued.
  • Columns that can be up to 2GB with a maximum database size of 16TB.

Note: The ESENT database file cannot be shared between multiple processes simultaneously. ESENT works best for applications with simple, predefined queries; if you have an application with complex, ad-hoc queries, a storage solution that provides a query layer will work better for you.

Wowza. I need 16TB databases, I use those all the time. LOL.

My path to stumbling upon Esent was first by looking at RavenDB, which is rumored to be built upon Esent as its storage engine. Searching for more info on Esent, I came across ManagedEsent, which provides a crazy-cool PersistentDictionary and exposes the native Esent with an API wrapper.

To be quite honest, the Jet-prefixed API points look to be far too low-level for my interests, but some of the helper classes are definitely a step in the right direction in making this API more C#-like.

I’m particularly fascinated, however, by the PersistentDictionary. It’s a really neat, simple way to persist ID’d serializable objects to the hard drive very efficiently. Unfortunately it is perhaps too simple; it does not eliminate the need for NoSQL services that provide rich document querying and indexing.
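For a taste of what that looks like, here's a minimal sketch (not lifted from the ManagedEsent docs; the directory name and key are made up, and it assumes a reference to the ManagedEsent assembly that exposes Microsoft.Isam.Esent.Collections.Generic):

using System;
using Microsoft.Isam.Esent.Collections.Generic;

class PersistentDictionaryDemo
{
    static void Main()
    {
        // The constructor argument is a directory; ESENT creates its database files there.
        using (var settings = new PersistentDictionary<string, string>("SettingsStore"))
        {
            settings["lastRun"] = DateTime.UtcNow.ToString("o");

            // Values survive process restarts because they are persisted to disk by ESENT.
            string lastRun;
            if (settings.TryGetValue("lastRun", out lastRun))
            {
                Console.WriteLine("Last run: " + lastRun);
            }
        }
    }
}

Since PersistentDictionary<TKey, TValue> implements IDictionary<TKey, TValue>, it drops into existing code that expects an ordinary dictionary.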

Looks like someone over there at Microsoft who plays with Esent development is blogging: http://blogs.msdn.com/b/laurionb/


Microsoft Windows | Software Development

Four Methods Of Simple Caching In .NET

by Jon Davis 30. August 2010 01:07

Caching is a fundamental component of working software. The role of any cache is to decrease the performance footprint of data I/O by moving it as close as possible to the execution logic. At the lowest level of computing, a thread relies on a CPU cache to be loaded up each time the thread context is switched. At higher levels, caches are used to offload data from a database into a local application memory store or, say, a Lucene directory for fast indexing of data.

Although .NET has picked up some decent hashtable-turned-dictionary options as it has evolved, until .NET 4.0 there was never a general-purpose caching table built directly into .NET. By “caching table” I mean a caching mechanism where keyed items get put into it but eventually “expire” and get deleted, whether by:

a) a “sliding” expiration whereby the item expires in a timespan-determined amount of time from the time it gets added,

b) an explicit DateTime whereby the item expires as soon as that specific DateTime passes, or

c) the item is prioritized and is removed from the collection when the system needs to free up some memory.

There are some methods people can use, however.

Method One: ASP.NET Cache

ASP.NET, or the System.Web.dll assembly, does have a caching mechanism. It was never intended to be used outside of a web context, but it can be used outside of the web, and it does perform all of the above expiration behaviors in a hashtable of sorts.

After scouring Google, I found that quite a few people who have discussed the built-in caching functionality in .NET have resorted to using the ASP.NET cache in their non-web projects. This is no longer the most-available, most-supported built-in caching system in .NET; .NET 4 has an ObjectCache which I’ll get into later. Microsoft has always been adamant that the ASP.NET cache is not intended for use outside of the web. But many people are still stuck in .NET 2.0 and .NET 3.5, and need something to work with, and this happens to work for many people, even though MSDN says clearly:

Note:

The Cache class is not intended for use outside of ASP.NET applications. It was designed and tested for use in ASP.NET to provide caching for Web applications. In other types of applications, such as console applications or Windows Forms applications, ASP.NET caching might not work correctly.

The class for the ASP.NET cache is System.Web.Caching.Cache in System.Web.dll. However, you cannot simply new-up a Cache object. You must acquire it from System.Web.HttpRuntime.Cache.

Cache cache = System.Web.HttpRuntime.Cache;

Working with the ASP.NET cache is documented on MSDN here.
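To illustrate, here's roughly what adding an item with a sliding expiration and a removal callback looks like (a minimal sketch of my own, not from the MSDN page; the key name, cached object, and timespan are arbitrary):

// requires a reference to System.Web.dll; using System; using System.Web.Caching;
Cache cache = System.Web.HttpRuntime.Cache;
object customerRecord = new { Name = "Fred" }; // stand-in for whatever you are caching

cache.Insert(
    "customer:42",                      // the key must be a string
    customerRecord,
    null,                               // no CacheDependency
    Cache.NoAbsoluteExpiration,
    TimeSpan.FromMinutes(20),           // sliding expiration
    CacheItemPriority.NotRemovable,     // don't let memory pressure eject it early
    (key, value, reason) => { /* called when the item expires or is removed */ });

object customer = cache["customer:42"]; // returns null once expired or removed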

Pros:

  1. It’s built-in.
  2. Despite the .NET 1.0 syntax, it’s fairly simple to use.
  3. When used in a web context, it’s well-tested. Outside of web contexts, according to Google searches it is not commonly known to cause problems, despite Microsoft recommending against it, so long as you’re using .NET 2.0 or later.
  4. You can be notified via a delegate when an item is removed, which is necessary if you need to keep it alive and you could not set the item’s priority in advance.
  5. Individual items have the flexibility of any of (a), (b), or (c) methods of expiration and removal in the list of removal methods at the top of this article. You can also associate expiration behavior with the presence of a physical file.

Cons:

  1. Not only is it static, there is only one. You cannot create your own type with its own static instance of a Cache. You can only have one bucket for your entire app, period. You can wrap the bucket with your own wrappers that do things like pre-inject prefixes in the keys and remove these prefixes when you pull the key/value pairs back out. But there is still only one bucket. Everything is lumped together. This can be a real nuisance if, for example, you have a service that needs to cache three or four different kinds of data separately. This shouldn’t be a big problem for pathetically simple projects. But if a project has any significant degree of complexity due to its requirements, the ASP.NET cache will typically not suffice.
  2. Items can disappear, willy-nilly. A lot of people aren’t aware of this—I wasn’t, until I refreshed my knowledge on this cache implementation. By default, the ASP.NET cache is designed to destroy items when it “feels” like it. More specifically, see (c) in my definition of a cache table at the top of this article. If another thread in the same process is working on something completely different, and it dumps high-priority items into the cache, then as soon as .NET decides it needs to free up some memory it will start to destroy some items in the cache according to their priorities, lower priorities first. All of the examples documented here for adding cache items use the default priority, rather than the NotRemovable priority value, which keeps an item from being removed for memory-clearing purposes but still removes it according to the expiration policy. Peppering CacheItemPriority.NotRemovable throughout your cache invocations can be cumbersome; otherwise, a wrapper is necessary.
  3. The key must be a string. If, for example, you are caching data records where the records are keyed on a long or an integer, you must convert the key to a string first.
  4. The syntax is stale. It’s .NET 1.0 syntax, even uglier than ArrayList or Hashtable. There are no generics here, no IDictionary<> interface. It has no Contains() method, no Keys collection, no standard events; it only has a Get() method plus an indexer that does the same thing as Get(), returning null if there is no match, plus Add(), Insert() (redundant?), Remove(), and GetEnumerator().
  5. Ignores the DRY principle of setting up your default expiration/removal behaviors so you can forget about them. You have to explicitly tell the cache how you want the item you’re adding to expire or be removed every time you add an item.
  6. No way to access the caching details of a cached item such as the timestamp of when it was added. Encapsulation went a bit overboard here, making it difficult to use the cache when in code you’re attempting to determine whether a cached item should be invalidated against another caching mechanism (i.e. session collection) or not.
  7. Removal events are not exposed as events and must be tracked at the time of add.
  8. And if I haven’t said it enough, Microsoft explicitly recommends against it outside of the web. And if you’re cursed with .NET 1.1, you’re not supposed to use it with any confidence of stability at all outside of the web, so don’t bother.

Method Two: The Enterprise Library Caching Application Block

Microsoft’s recommendation, up until .NET 4.0, has been that if you need a caching system outside of the web then you should use the Enterprise Library Caching Application Block.

The Microsoft Enterprise Library is a coordinated effort between Microsoft and a third party technology partner company Avanade, mostly the latter. It consists of multiple meet-many-needs general purpose technology solutions, some of which are pretty nice but many of which are in an outdated format such as a first-generation O/RM that doesn’t support LINQ.

I have personally never used the EntLib Caching App Block. It looks bloated. Others on the web have commented that they thought it was bloated, too, but once they started using it they saw that it was pretty simple and straightforward. I personally am not sold, but since I haven’t tried it I cannot pass a fair judgment. So for whatever it’s worth, here are some pros/cons:

Pros:

  1. Recommended by Microsoft as the cache mechanism for non-web projects.
  2. The syntax looks to be familiar to those who were using the ASP.NET cache.

Cons:

  1. Not built-in. The assemblies must be downloaded and separately referenced.
  2. Not actually created by Microsoft. (EntLib is an Avanade solution that Microsoft endorses as its own.)
  3. The syntax is the same .NET 1.0 style stuff that I believe defaults to normal priority (auto-ejects the items when memory management feels like it) rather than NotRemovable priority, and does not reinforce the DRY principles of setting up your default expiration/removal behaviors so you can forget about them.
  4. Potentially bloated.

Method Three: .NET 4.0’s ObjectCache / MemoryCache

Microsoft finally implemented an abstract ObjectCache class in the latest version of the .NET Framework, and a MemoryCache implementation that inherits and implements ObjectCache for in-memory purposes in a non-web setting.

System.Runtime.Caching.ObjectCache is in the System.Runtime.Caching.dll assembly. It is an abstract class that declares basically the same .NET 1.0 style interfaces that are found in the ASP.NET cache. System.Runtime.Caching.MemoryCache is the in-memory implementation of ObjectCache and is very similar to the ASP.NET cache, with a few changes.

To add an item with a sliding expiration, your code would look something like this:

var config = new NameValueCollection();
var cache = new MemoryCache("myMemCache", config);
cache.Add(new CacheItem("a", "b"),
    new CacheItemPolicy
    {
        Priority = CacheItemPriority.NotRemovable,
        SlidingExpiration = TimeSpan.FromMinutes(30)
    });
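Retrieval, continuing the sample above, is much like the ASP.NET cache, with the welcome addition of Contains() (a quick sketch):

if (cache.Contains("a"))
{
    var value = (string)cache.Get("a"); // or cache["a"]; returns null if the item has expired
}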

Pros:

  1. It’s built-in, and now supported and recommended by Microsoft outside of the web.
  2. Unlike the ASP.NET cache, you can instantiate a MemoryCache object instance.
    Note: It doesn’t have to be static, but it should be—that is Microsoft’s recommendation (see yellow Caution).
  3. A few slight improvements have been made vs. the ASP.NET cache’s interface: you can subscribe to removal events without necessarily being there when the items were added, the redundant Insert() was removed, items can be added with a CacheItem object whose initializer defines the caching strategy, and Contains() was added.

Cons:

  1. Still does not fully reinforce DRY. From my small amount of experience, you still can’t set the sliding expiration TimeSpan once and forget about it. And frankly, although the policy in the item-add sample above is more readable, it necessitates horrific verbosity.
  2. It is still not generically keyed; it requires a string as the key. So you can’t key by long or int if you’re caching data records, unless you convert the key to a string.

Method Four: Build One Yourself

It’s actually pretty simple to create a caching dictionary that performs explicit or sliding expiration. (It gets a lot harder if you want items to be auto-removed for memory-clearing purposes.) Here’s all you have to do (a minimal sketch follows the list):

  1. Create a value container class called something like Expiring<T> or Expirable<T> that would contain a value of type T, a TimeStamp property of type DateTime to store when the value was added to the cache, and a TimeSpan that would indicate how far out from the timestamp that the item should expire. For explicit expiration you can just expose a property setter that sets the TimeSpan to the desired expiration date minus the timestamp.
  2. Create a class, let’s call it ExpirableItemsDictionary<K,T>, that implements IDictionary<K,T>. I prefer to make it a generic class with <K,T> defined by the consumer.
  3. In the class created in #2, add a Dictionary<K,Expiring<T>> as a property and call it InnerDictionary.
  4. The implementation of IDictionary<K,T> in the class created in #2 should use the InnerDictionary to store cached items. Encapsulation would hide the caching method details via instances of the type created in #1 above.
  5. Make sure the indexer (this[]), ContainsKey(), etc., are careful to clear out and remove expired items before returning a value. Return null in getters if the item was removed.
  6. Use thread locks on all getters, setters, ContainsKey(), and particularly when clearing the expired items.
  7. Raise an event whenever an item gets removed due to expiration.
  8. Add a System.Threading.Timer instance and rig it during initialization to auto-remove expired items every 15 seconds. This is the same behavior as the ASP.NET cache.
  9. You may want to add an AddOrUpdate() routine that pushes out the sliding expiration by replacing the timestamp on the item’s container (Expiring<T> instance) if it already exists.

Microsoft has to support its original designs because its user base has built up a dependency upon them, but that does not mean that they are good designs.

Pros:

  1. You have complete control over the implementation.
  2. Can reinforce DRY by setting up default caching behaviors and then just dropping key/value pairs in without declaring the caching details each time you add an item.
  3. Can implement modern interfaces, namely IDictionary<K,T>. This makes it much easier to consume as its interface is more predictable as a dictionary interface, plus it makes it more accessible to helpers and extension methods that work with IDictionary<>.
  4. Caching details can be unencapsulated, such as by exposing your InnerDictionary via a public read-only property, allowing you to write explicit unit tests against your caching strategy as well as extend your basic caching implementation with additional caching strategies that build upon it.
  5. Although it is not necessarily a familiar interface for those who already made themselves comfortable with the .NET 1.0 style syntax of the ASP.NET cache or the Caching Application Block, you can define the interface to look however you want.
  6. Can use any type for keys. This is one reason why generics were created. Not everything should be keyed with a string.

Cons:

  1. Is not invented by, nor endorsed by, Microsoft, so it is not going to have the same quality assurance.
  2. Assuming only the instructions I described above are implemented, does not “willy-nilly” clear items for clearing memory on a priority basis (which is a corner-case utility function of a cache anyway .. BUY RAM where you would be using the cache, RAM is cheap).

     

Among all four of these options, this is my preference. I have implemented this basic caching solution. So far, it seems to work perfectly, there are no known bugs (please contact me with comments below or at jon-at-jondavis if there are!!), and I intend to use it in all of my smaller side projects that need basic caching. Here it is: 

Download: ExpirableItemDictionary.zip

Worthy Of Mention: AppFabric, NoSQL, Et Al

Notice that the title of this blog article indicates “Simple Caching”, not “Heavy-Duty Caching”. If you want to get into the heavy-duty stuff, you should look at memcached, AppFabric, ScaleOut, or any of the many NoSQL solutions that are shaking up the blogosphere and Q&A forums.

By “heavy duty” I mean scaled-out solutions, as in scaled across multiple servers. There are some non-trivial costs associated with scaling out.

Scott Hanselman has a blog article called “Installing, Configuring and Using Windows Server AppFabric and the ‘Velocity’ Memory Cache in 10 Minutes”. I already tried installing AppFabric; it was a botched and failed job, but I plan to try to tackle it again, now that Microsoft has hopefully updated their online documentation and provided us with some walk-throughs, i.e. Scott’s.

As briefly hinted in the introduction of this article, another approach to caching is using something like Lucene.Net to store database data locally. Lucene calls itself a “search engine”, but really it is a NoSQL [Wikipedia link] storage mechanism in the form of a queryable index of document-based tables (called “directories”).

Other NoSQL options abound, most of them being Linux-oriented like CouchDB (which runs but stinks on Windows) but not all of them. For example, there’s RavenDB from Oren Eini (author of the popular ayende.com blog) which is built entirely in and for .NET as a direct competitor to the likes of Lucene.net. There are a whole bunch of other NoSQL options listed over here.

I have had the pleasure of working with Lucene.net and the annoyance of poking at CouchDB on Windows (Linux folks should not write Windows software, bleh .. learn how to write Windows services, folks), but not much else. Perhaps I should take a good look at RavenDB next.

 


C# | Software Development

I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as “a C++ fixxer-upper”. I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google’s mantra still lines up. The video demonstrates a lot of evils of C++. Despite some attempts at self-edumacation, I personally cannot read nor write real-world C++ because I get lost in the gobbligook that he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, The ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives as Java and now Go had, and it has actually been fundamentally evolving at the core, such as to replace patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.

While trying to take Go in with great excitement, I’ve been forced to conclude that Google’s own message delivery sucks, mainly by completely ignoring some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there)—much like Microsoft’s message delivery of C#’s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory but I can’t find it). The video linked above doesn’t even show a single example of Java suckage; it just made these painful accusations of Java being right in there with C++ as being a crappy language to work with. I haven’t coded in Java in about a decade, honestly, but back then Java was the shiznat and code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways and ultimately I find it far less appetizing than C#. But where is Google’s nod to C#? Oh that’s right, C# doesn’t exist, it’s a fragment of someone’s imagination because Google considers Microsoft (C#’s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator’s true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did as far as trying to “fix” C++. In fact, most of the problems Go describes of C++ were the focus of C#’s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google’s objectives themselves, we don’t see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for previous C++ development efforts using perhaps emacs or vi (Rob even listed IDEs as a failure in modern languages). Their requirements in Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn’t get in the way of their business objectives, that performs very fast, and that compiles quickly enough to let them escape out of vi, invoke gcc or whatever compiler, and get back to coding. These are certainly great nice-to-haves, but I’m pretty sure that’s about it.

Consider, in contrast, .NET’s objectives a decade ago, .NET being at the core of applied C# as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn’t suck because its intentions were gorgeous—to have a language-neutral marshalling paradigm between runtimes—but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocators. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature, it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and the tools because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant level. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s and thus theoretically Go’s performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement which Google, in its corporate and web sandboxes, is oblivious to; I’m not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement which Google in its corporate and web sandboxes is oblivious to, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here; instead they push Python and Java on their app cloud. I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn’t get this; it’s stupid to have to recompile an application’s distribution, software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation. Once again, Google is not out in the real world, they live in a box (their headquarters and their exposed web).

And with the goals of C#,

  • C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go’s web site (http://golang.org/) because the language does not look like much of a readability / maintainability improvement to what Java and C# offered up.

  • Go supposedly offers up memory management, but still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static!
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := operator syntax rather than the = operator for assignment. I suppose this would help eliminate the issue where people would type = where they meant to type == and destroy their variables.
  • Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized such that late-executed code that should be called after immediate code doesn’t have to be placed below the immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it.
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is
       
        var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

       
      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..
       
        int[] p = null; // declares an array variable; p is null; rarely useful
        var v = new int[100]; // the variable v now refers to a new array of 100 ints

       
      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, nevermind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbligook. C# offers richly defined System.Array and all this stuff is transparent to the coder who really doesn’t need to know that there are pointers, somewhere, associated with the reference to your array, isn’t that the way it all should be? Is it really necessary to have a completely different semantic (new() vs. make())? Ohh yeah. The frickin pointer vs. the reference.
  • I see Go has a fmt.Printf(), plus a fmt.Fprintf(), plus a fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is; http://golang.org/search?q=overloading
  • Go has “goroutines”. It’s basically, “go func() { /* do stuff */ }” and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code (see the one-liner sketch after this list), so yes, it’s supported. F# (a young .NET sibling of C#) has this, too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s.
  • Go has channels for communication purposes. C# has WCF for this, which is frankly a mess. The closest you can get to Go on the CLR as far as channels go is Axum, which is a variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover.
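For reference, the one-line C# thread pool dispatch alluded to in the goroutines bullet above (a trivial sketch; the body stands in for whatever work you want to run in parallel):

System.Threading.ThreadPool.QueueUserWorkItem(_ => { /* do stuff */ });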

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (that, frankly, typically suck at a design level when they aren’t tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, honestly I think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined everywhere on planet Earth through the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix and applications and services that run on it has been the majority; if nothing else, Microsoft has most certainly still had the lead in software as a company, to which my point is targeted.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly the fact that Go presentations neglect C# casts serious doubt on Go’s trustworthiness.

In my opinion, a better approach to what Google was trying to do with Go would be to take a popular language, such as C#, F#, or Axum, and break it away from the language’s implementation libraries, i.e. the .NET platform’s BCL, replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be both a) a much better language and runtime than Go because it would offer most of the benefits of Go but in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR’s and C#’s multitude of advantages over C/C++), but also b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al, our needs are quite well met. So thanks anyway, Google, but, really, you should go now.


C# | F# | General Technology | Mono | Opinion | Software Development

Gemli.Data: Basic LINQ Support

by Jon Davis 6. June 2010 04:13

In the Development branch of Gemli.Data, I finally got around to adding some very, very basic LINQ support. The following test scenarios currently seem to function correctly:

var myCustomEntityQuery = new DataModelQuery<DataModel<MockObject>>();
// Scenarios 1-3: Where() lambda, boolean operator, method exec
var linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue == "dah");
linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue != "dah");
// In the following scenario, GetPropertyValueByColumnName() is an explicitly supported method
linqq = myCustomEntityQuery.Where(mo=>((int)mo.GetPropertyValueByColumnName("customentity_id")) > -1);
// Scenario 4: LINQ formatted query
var q = (from mo in myCustomEntityQuery
         where mo.Entity.MockStringValue != "st00pid"
         select mo) as DataModelQuery<DataModel<MockObject>>;
// Scenario 5: LINQ formatted query execution with sorted ordering
var orderedlist = (from mo in myCustomEntityQuery
                   where mo.Entity.MockStringValue != "def"
                   orderby mo.Entity.MockStringValue
                   select mo).ToList();
// Scenario 6: LINQ formatted query with multiple conditions and multiple sort members
// (reassigning rather than redeclaring orderedlist so the scenarios compile in one scope)
orderedlist = (from mo in myCustomEntityQuery
               where mo.Entity.MockStringValue != "def" && mo.Entity.ID < 3
               orderby mo.Entity.ID, mo.Entity.MockStringValue
               select mo).ToList();

This is a great milestone, one I’m very pleased with myself for finally accomplishing. There’s still a ton more to do but these were the top 50% or so of LINQ support scenarios needed in Gemli.Data.

Unfortunately, adding LINQ support brought about a rather painful rediscovery of critical missing functionality: the absence of support for OR (||) and condition groups in Gemli.Data queries. *facepalm*  I left it out earlier as a to-do item but completely forgot to come back to it. That’s next on my plate. *burp*


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
