Fediverse Microblogging Protocols: Part 1

by Jon Davis 3. November 2019 15:19

You may or may not have heard of "the Fediverse". No? How about Mastodon? If not that, surely you have heard of gab.com.

Gab.com was rebuilt a year or so ago on top of Mastodon, an open source microblogging web platform. It is a branded part of the Fediverse, a network of microblogging sites that are all able and potentially willing (via a human decision and a configuration) to talk to each other. Effectively, this is fail-safe distributed computing with social microblogging (Twitter clones) as the form of application. The objective of these frameworks was to establish a censorship-proof system of web hosting, so that people who wanted to enjoy microblogging topics considered outside the generally socially acceptable could always have a place to go on the Internet, because the microblogging hosts are effectively proxied out across many servers. One might say it is the dark web originating on the open, public web.

The Fediverse was also created largely by the alt-left. These are the bizarre perverts of the world, such as nudists, or furries, people who fantasize about bestiality, typically via hand-drawn anime art or cosplay. Until recently, sexual deviants were considered to have strayed too far from the norms of society to be allowed a place on the Internet, so censorship was frequent, which is a key motivator of the Fediverse. However, gab.com, which also struggles with a history of censorship, facilitates thoughts and content from people from all walks of life, including some people on the extreme alt-right--Neo-Nazis and the like. So gab.com forked Mastodon to recreate the gab.com social network on the Mastodon framework, replacing, adding, and removing features according to Gab's whims and needs. Needless to say, the Mastodon community proved furious about this--that alt-right zealots are allowed to have an uncensored platform--but they made this bed.

Meanwhile, I personally have been studiously monitoring social networking technologies, companies, and trends since the beginning. I was a participant from the beginning. Like, the very, very beginning. GeoCities was the socially driven personal expression platform of choice at the start. My first web site was on GeoCities, until I started hosting my own domain. Then came MySpace and blogging. Everyone had "a MySpace". Then, from out of nowhere, Facebook took off, and the personalized rebranding of an individual's profile on MySpace was deemed exotically ugly and forever forgotten in favor of the standardized look and feel of Facebook's fixed layout. All the while, Facebook's mixing of "wall" posts from various users--friends and followed feeds--was something new and amazing. It filled the gap left by the antiquation of NNTP newsgroups and message boards. It completely changed the landscape of Internet social networking. Then came Twitter and YouTube. With YouTube, sharing your life on video became not just possible; it became a normal way for people to experience pseudo-relationships of mindsharing, complete with facial expressions and body language, with the connected world around them. The people of the world became truly cohesive; the Internet was the glue.

And then people who had traditionally held power over people's minds began to panic. Facebook and Google and the like saw their roles in the world as guidance mechanisms for swaying world opinion, and mainstream media (MSM) began to have their published opinions (*cough* .. "news") prioritized as curated "Trends". YouTube stopped allowing the popularity of everyday people to determine their priority in searches and top-level browsing, and instead now prioritizes packaged infotainment like CNN as the primary resources to be found for any given topic. And Twitter, Facebook, YouTube, Instagram, and so on are all perpetually being mentioned in the (alternative) news headlines for censoring people for sharing thoughts and opinions that go against the preconceived narratives that mainstream media and big tech employees (most of whom are in coastal or otherwise high-population cities, I might add, notably San Francisco, Portland, Los Angeles, and New York City) are trying to guide the world to embrace.

This is unacceptable, but at the same time expected. As a conservative Christian, I must admit that we had "quite a ride" of privileged authority in driving society for many decades, even centuries, yet we were told by Jesus to expect that the world would hate us, that we would be the underdog. That hasn't happened in our Western society, not really, not in my lifetime. So I can't be angry that conservatism has started to move towards the back seat. Nor should I suggest we are somehow "victims". We're not. The victims are not in the West. The victims are in China, in Pakistan, in India, all over the place other than the West. The West's time is coming. It isn't here yet, despite "progress" in multiculturalism efforts. But it's coming. This expectation has been instilled in me since pre-Matrix days, even in the music I listened to as a youth.

So my interest is in watching for, supporting, and participating in censor-proof platforms, "for everyone" but including conservative Christians. This is why I've invested in Gab. I've paid for a lifetime pro membership with Gab. But I've only just gotten started.

Gab.com made their software (their fork of Mastodon) open source and published it at https://code.gab.com/. They invite the Internet community to clone it and set up Gab Social instances all over the Internet. Presumably people have. So I tried to. I can't. Trying on both my home workstation and my laptop, both of which run Windows 10, I am stuck with a local setup failure, starting with the error that webpack-dev-server never actually comes up. Gab Social's repository doesn't seem to have a support group, there doesn't seem to be a Gab Social group on gab.com, and no one replied to my post on Gab, so I'm left in the dark here--and with this I'm also seeing some serious community-related limitations of the platform in the first place.

My motives to run Gab Social and work on its code myself were indeterminate--it's not truly open source for the gab.com platform so much as disclosed source--but at minimum I wanted to see what it consisted of. And even though I can't get it up and running, and thus can't adapt it to my needs and interests--without, say, who knows, maybe somehow $buying an audience with someone at Gab--I can at least see on the surface what it consists of.

The Gab Social software stack consists of:

  • PostgreSQL,
  • Ruby on Rails,
  • Node.js,
  • Redis, and
  • ReactJS,
  • with deployments on Docker.

Knowing that the software is based on Mastodon, which is a branded implementation player on the Fediverse, this means that it runs on one or more of these protocols:

  • ActivityPub (the W3C federation protocol that recent versions of Mastodon speak),
  • OStatus (the older federation suite: Atom feeds, Salmon, and PubSubHubbub/WebSub), and
  • WebFinger (the account discovery piece that makes @user@host addresses resolvable).

Knowing the protocol(s) used by Fediverse-participating software is important because it means that I, as a Microsoft-stack .NET developer, might be able to build a microblogging-oriented social network infrastructure that is compatible with Gab Social, and perhaps integrate with it (at least to the extent of being on the same Fediverse), without actually utilizing any of its RoR/NodeJS/etc. implementation--not that I'm fundamentally opposed to RoR/NodeJS/etc.
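For a concrete sense of what being on the same Fediverse would mean at the wire level, here is a minimal sketch--my own guesswork in ASP.NET Core, not Gab Social's or Mastodon's actual code--of the WebFinger discovery endpoint that federated servers hit first when resolving an @user@host address. The example.com domain and the user lookup are placeholders.

    // Minimal WebFinger discovery endpoint sketch (ASP.NET Core).
    // Not Gab Social's or Mastodon's actual implementation; example.com
    // and the user lookup are placeholders.
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    public class WebFingerController : ControllerBase
    {
        // e.g. GET /.well-known/webfinger?resource=acct:jon@example.com
        [HttpGet("/.well-known/webfinger")]
        public IActionResult Get([FromQuery] string resource)
        {
            if (resource == null || !resource.StartsWith("acct:"))
                return BadRequest();

            // "acct:jon@example.com" -> "jon"
            var user = resource.Substring("acct:".Length).Split('@')[0];

            // Respond with a JRD (JSON Resource Descriptor) pointing at
            // the ActivityPub actor document for this user.
            return new JsonResult(new
            {
                subject = resource,
                links = new[]
                {
                    new
                    {
                        rel = "self",
                        type = "application/activity+json",
                        href = $"https://example.com/users/{user}"
                    }
                }
            }) { ContentType = "application/jrd+json" };
        }
    }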
 
I will be following up with a Part 2 of this blog entry when I have done more homework on these and have come to a better understanding of what's been going on here. Sadly, all of the protocols in the list above are foreign to me, which means I haven't been monitoring social networking tech as closely as I should have, which is particularly shameful considering I have had a number of false starts on social network platform framework side projects over the years, though those efforts effectively stopped right around the time these protocols seem to have begun.


Blog | General Technology | Software Development | Web Development | Microblogging

Prelude to a Blogging Reset

by Jon Davis 19. October 2019 22:55

I started blogging around 2002/2003, when Blogger and Radio UserLand inspired me to create a desktop blogging app I called PowerBlog.

Archive.org: https://web.archive.org/web/20031010203956/http://powerblog.net/
PowerBlog (beta release)

My motivations were three:

  1. To grow my skill set as a software developer.
  2. To see if I could maybe make a living building and selling home-grown software. (PowerBlog was available for sale, but didn't sell.)
  3. To write, and write, and write. My brain was churning, and I wanted to divulge my thoughts out loud. 

PowerBlog was written in VB6. It was built around an instance of the Internet Explorer WebBrowser COM object to drive a true WYSIWYG experience, design philosophies similar to those of Outlook Express (an e-mail and newsgroup client), and a publish mechanism component model, including support for user-defined publishing scripts using the Active Scripting (VBScript/JScript) runtime component. When .NET v1.1 came out, I rewrote PowerBlog, reusing the same components from Microsoft but adding more features like XML-RPC--for which I built my own client/server libraries with self-documenting HTML output similar to ASP.NET's .asmx behavior--and style synchronization, so that the user could edit in the same CSS style as the page itself. I tried to sell PowerBlog for $10 to $20 per download. It didn't sell. That's kind of okay; the value of my efforts was found in my skills ramp-up and in my having a tool for my own writing.
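For a sense of what that XML-RPC plumbing amounted to, here is a from-memory sketch in modern C#--not PowerBlog's actual VB6/.NET 1.1 code--of a metaWeblog.newPost publish call, which was just an XML <methodCall> document POSTed to the blog's endpoint (the URL and credentials are placeholders):

    // From-memory sketch of a metaWeblog.newPost XML-RPC call in modern C#.
    // Not PowerBlog's actual code; the endpoint URL and credentials are
    // placeholders.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class XmlRpcPublishDemo
    {
        static async Task Main()
        {
            // metaWeblog.newPost(blogid, username, password, struct, publish)
            var payload = @"<?xml version=""1.0""?>
    <methodCall>
      <methodName>metaWeblog.newPost</methodName>
      <params>
        <param><value><string>1</string></value></param>
        <param><value><string>jon</string></value></param>
        <param><value><string>secret</string></value></param>
        <param><value><struct>
          <member><name>title</name><value><string>Hello</string></value></member>
          <member><name>description</name><value><string>&lt;p&gt;Post body.&lt;/p&gt;</string></value></member>
        </struct></value></param>
        <param><value><boolean>1</boolean></value></param>
      </params>
    </methodCall>";

            using var client = new HttpClient();
            var response = await client.PostAsync(
                "https://example.com/metaweblog-api",
                new StringContent(payload, Encoding.UTF8, "text/xml"));
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }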

PowerBlog got Microsoft's attention. Three things happened, all of a sudden, with Microsoft: 1) the Windows Live initiative kicked off blogging from their web site (which they've since shuttered), 2) Microsoft interviewed me, and 3) Windows Live Writer was written, stealing most of the good ideas I had in PowerBlog (admittedly, tidbits of PowerBlog's ideas were themselves taken from another app called W.bloggar). I got nothing for it but some bragging rights for the ideas.

I was broke. My motivators to go out on my own (a taxes-related need to go offline to do some research) subsided. So I stopped development and went back out into the workforce. When .NET 2.0 came out along with a new version of Internet Explorer, it broke PowerBlog permanently. Microsoft published WebBrowser components that directly collided with the WebBrowser components I had created for PowerBlog. So PowerBlog isn't available today--it will just crash for you if you try--and I couldn't even build my own source code anymore without finding an old Windows XP box with an old version of Visual Studio and .NET Framework 1.1. It just wasn't worth the hassle, so I never bothered. Maintenance was immediately halted. PowerBlog.net went offline. I didn't much care. I did need to keep blogging and writing, so eventually I found BlogEngine.net and transferred my technical blog content to it. I updated the blog engine code once or twice as it evolved, but eventually I just let it sit dormant.

So that's where my tech blog instance stands today. http://www.jondavis.net/techblog/ runs on a decade-old version of BlogEngine.net which I've tailored a bit to include banned IPs and some home-grown CAPTCHA functionality for the comments. Over the years I've looked at a few other blogging engines I considered adopting: most notably NBlog, which gave me the most control but required me to implement it in code and didn't sufficiently demonstrate black-box modularity, and Ghost, which would grow me in NodeJS but ticked me off with its stance of only supporting Markdown and never WYSIWYG HTML editing.

Meanwhile, the whole time, developers and non-developers alike have pooh-poohed the interestingness of blogs, blogging software, blogging strategies, blogging services, etc. I can certainly see why. At its most basic level, a blog is little more than a single-table CRUD app. You have an index of recent posts, you have a detail view of an individual blog post, you have an editor and creator page, and you can delete a blog post. The content of a blog post is, to the casual observer, nothing but four components: title, body, author, and publish date. If that's what you think a blog is, you're naive, and likely didn't bother reading this far.
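To make the point concrete, that naive view fits in a single C# class--a sketch of the casual observer's data model, not BlogEngine.net's actual schema:

    // The "casual observer" model of a blog: one table, four meaningful
    // columns, basic CRUD. Everything interesting lives outside this class.
    using System;

    public class BlogPost
    {
        public Guid Id { get; set; }
        public string Title { get; set; }
        public string Body { get; set; }
        public string Author { get; set; }
        public DateTime PublishDate { get; set; }
    }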

At the very least, on the technical side, blogging brought about a few innovations that revolutionized the Internet, including: 

  • XML-RPC (SOAP's midget predecessor)
  • RSS
  • Publish pings
  • Atom
  • OPML
Not to mention less specific innovations, like end-user-accessible management of slugs (URL paths) for enhancing SEO. And the whole notion of blogging became the basis of the World Wide Web's notion of community interconnectivity. The broad set of Web sites was social networking, before social networks took over social networking. Blog sites were known, and linked to each other. Followers of blogs created view counts that filled the hole that Facebook post likes fill today.
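Two of those innovations, RSS and Atom, eventually landed right in the .NET BCL. As an illustration--a sketch using System.ServiceModel.Syndication (available in the .NET Framework and as a NuGet package for .NET Core), not anything from this blog's actual feed code--one feed model serializes as either format:

    // Sketch: one in-memory feed model serialized as RSS 2.0 (or Atom 1.0),
    // using System.ServiceModel.Syndication. Illustrative only.
    using System;
    using System.ServiceModel.Syndication;
    using System.Xml;

    class FeedDemo
    {
        static void Main()
        {
            var feed = new SyndicationFeed(
                "Jon's Tech Blog",
                "Ramblings on software development",
                new Uri("http://www.jondavis.net/techblog/"))
            {
                Items = new[]
                {
                    new SyndicationItem(
                        "Prelude to a Blogging Reset",
                        "I started blogging around 2002/2003...",
                        new Uri("http://www.jondavis.net/techblog/"))
                }
            };

            using (var writer = XmlWriter.Create(Console.Out))
            {
                new Rss20FeedFormatter(feed).WriteTo(writer);    // RSS 2.0
                // new Atom10FeedFormatter(feed).WriteTo(writer); // or Atom 1.0
            }
        }
    }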
 
This is perhaps why dev.to is such a success story; it's like a Facebook group, for coders, where every single post is a blog article.
 
Anyway, so now I sit here typing into the ten-year-old deployment of BlogEngine.net on my resurrected tech blog, wondering what my next steps should be. Well, I'm pretty sure I want to create my own blog engine again, and build on it. Yes, it's a two-decades-stale idea, but here are my motives:
 
  • My blog philosophies still differ somewhat from the offerings currently available (although not far at all from BlogEngine.net)
  • I want to keep growing my technical skill set (not just write about them). There are revised web and software standards I need to freshen up on, and apply my learnings. Creating a blog site has always been a good mechanism to sample and apply the most basic web tech skills.
  • I need anything I deploy to a technical blog to portray my own personal portfolio, including the site on which my writings are displayed
  • I still like the idea of taking something I myself created and putting it out on github as a complete package and perhaps managing a hosted instance of it, similar to WordPress.
All this said, I don't think I'll be building a full-blown BlogEngine.net equivalent. It will be a simple thing, and from it I'll take some ideas to glean from and apply them to other software efforts I'll be working on.
 
But a few things I will point out about the technical strategies and approaches I intend to implement:
  1. Rich front-end libs (React, Vue, Blazor, etc)?? While I might take, say, Ghost's approach in having some rich interactivity for the author's experience, the end user (reader) experience cannot be forced to utilize them. A blog page must be rich in HTML semantics and index well SEO-wise for the web at large, so the HTTP response payload cannot be Javascript stub placeholders for dynamic content.
  2. Componentization is more important than anything else. I do intend to iterate over a lot of the features this old instance of BlogEngine.net has, as well as those of current blog engines, and figure out ways to spoonfeed those features without embedding them deeply in interdependencies. The worst thing I could do is embed all the bells and whistles directly into ASP.NET Razor Pages themselves. I intend to apply decorator pattern principles (see the sketch after this list), perhaps applying sidecar architecture for some components where appropriate. Right now I'm looking at using Identity Server 4, for example, as a sidecar architecture for author identity and privileges. Speaking of which,
  3. Everything must be readily run-anywhere, black-box-capable deployable as ad hoc code launched with self-hosting, Azure web app service (PaaS), Docker-contained, or VM (IaaS). I also intend to set up a hosted instance and offer limited blog hosting for free, extended features for a fee. In this sense, WordPress is a competing platform.
  4. "Extended features, like ..?"  Premium templates. Large storage hosting. And premium add-on features yet to be defined.
  5. The templates can be based on Razor, but "pages" cannot be tightly defined. If this platform grows up, it needs to be more than a blog; it needs to be a mini-CMS (content management system).
  6. Social networking ecosystem is a mandatory consideration; full blogging must integrate with microblogging and multi-contributor cross-pollination including across disparate sites and systems; more on this later, but imagine trusted Twitter users liking Facebook posts.
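Here is the decorator sketch promised in point 2--all hypothetical types of my own, showing how a feature like publish pings layers onto a core post service without the Razor Pages layer knowing about any of it:

    // Decorator sketch: features wrap a core post service instead of being
    // baked into Razor Pages. All type names here are hypothetical; BlogPost
    // is the simple model sketched earlier.
    using System.Threading.Tasks;

    public interface IPostService
    {
        Task PublishAsync(BlogPost post);
    }

    public class CorePostService : IPostService
    {
        public Task PublishAsync(BlogPost post)
        {
            // persist the post (storage details omitted)
            return Task.CompletedTask;
        }
    }

    // Each feature decorates the service rather than modifying it.
    public class PingingPostService : IPostService
    {
        private readonly IPostService _inner;
        public PingingPostService(IPostService inner) => _inner = inner;

        public async Task PublishAsync(BlogPost post)
        {
            await _inner.PublishAsync(post);
            // then send publish pings (weblogUpdates.ping and friends)
        }
    }

Composed at the root--new PingingPostService(new CorePostService())--the features stack without interdependencies, and removing one is a one-line change.
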
Again, yes, I am unpacking a two-decades-old solution proposal to a Web 1.0 / 2.0 problem, but that problem never really went away; it just became less prominent. I really don't anticipate this blowing up into much of anything. At a minimum, I need to convert all of my content here on this technical blog (at http://www.jondavis.net/techblog/) to a home-grown platform of my own making, even if after doing that I call it done and walk away.
 
Another motivator for getting that done is the fact that, just writing this, I'm writing with very tiny text on a very high resolution screen, and you're probably squinting while reading this, too, if you're reading it from my web site, and if you actually made it this far. Again, yes, I could simply replace the template and maybe refresh the version of BlogEngine.net which I'm using. But why do that, when I call myself a 23-years-plus web development veteran and could just build my own? Am I really so useless in my aging as to be unable to build something of my own again? It's not that a blog engine will get me a better job, duh; it's that using someone else's engine just plain looks bad on me. So, this is a lightweight test of mettle. I need to do this because I need to.
 
On a final note, I'll mention again what I'm focusing on first: "who am I"--authentication and authorization. I'm reading up on Identity Server 4. I'm still new to it, so this will take some time. Right now it looks like the black box version of my blog engine will come with an instance of Identity Server 4, based perhaps on ASP.NET Core Identity as the user store; developers can tweak it out to their heart's content. I'm still mulling over whether to just embed it into the blog app (underkill? too much in one?) or separate it out as a standalone SSO-oriented web app (overkill? too many distributed parts for a mere blog?), but at this point I'll likely do the former for the black box download (source code included) and the latter for the hosted instance which I hope to set up and offer to people wordpress.com-style.
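For the embedded option, my current mental sketch--based on IdentityServer4's documented extension methods, with ApplicationUser, ApplicationDbContext, and Config as placeholder types of mine, since I'm still learning this--looks roughly like:

    // Rough Startup sketch for embedding IdentityServer4 in the blog app,
    // with ASP.NET Core Identity as the user store. ApplicationUser,
    // ApplicationDbContext, and Config are placeholder types of mine.
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Identity;
    using Microsoft.Extensions.DependencyInjection;

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddIdentity<ApplicationUser, IdentityRole>()
                .AddEntityFrameworkStores<ApplicationDbContext>();

            services.AddIdentityServer()
                .AddDeveloperSigningCredential()    // dev only; use a real cert in production
                .AddInMemoryIdentityResources(Config.IdentityResources)
                .AddInMemoryClients(Config.Clients)
                .AddAspNetIdentity<ApplicationUser>(); // ASP.NET Core Identity as user store
        }

        public void Configure(IApplicationBuilder app)
        {
            app.UseRouting();
            app.UseIdentityServer(); // wires up the authentication middleware too
            app.UseAuthorization();
            // ... map Razor Pages / endpoints here
        }
    }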

 


Blog | General Technology | Web Development

Time to blow off some dust, toot my own horn, and hit the books

by Jon Davis 14. October 2019 23:59

Woah, Nelly!

Jon Davis's blog is back up! Last post was .. more than three years ago! Daaaang! How ya'll been?! Missed ya'll! Miss me?

I'm just posting a quick update here to let ya'll know that the gears in my tech world have started turning again, and to update ya'll on what's going on.

You may or may not have noticed that my root web site http://jondavis.net/ went through a couple phases and got stuck with that dorky "please wait" console-ish page. I was supposed to have replaced it by *cough* .. 2016. It's now approaching the year 2020. What happened? Well what happened was I got a few kicks in the pants. Got myself into some really humbling situations, not the fun kind. Perhaps you might say I got phased out. But I never left, so now I'm in high gear phasing back in.  I might try to explain ...

A few blog entries ago (this goes back to 2014) I wrote about how I needed to reset things, and how I needed to mull over what was going on at work. So here's a recap on my career path lately up till my previous job:

 

  1. In 2012 I was hired at Neudesic, a prominent consultancy firm. I dreamed of surrounding myself with smart people I could glean from. It turned out that things were pretty erratic at Neudesic. On my first project assignment, the client was Microsoft themselves. Prior to my joining, they had just canned a project to build a big, beautiful Windows 8 themed web site where developers and other staff would post all-important articles and index them with Lucene.NET. Architecturally it was a disaster, and they canned the Microsoft executive at the same time they canned the project. So when I joined the team, they were trying to salvage that project. I worked with them to try to dig into the code, get it up and running, figure out the performance issues, etc., and at some point I flew out to Redmond, Washington to discuss with the local Neudesic managers who were interfacing with Microsoft in person. Then I opened my mouth. I said, "Ya'll are really just makin' a blog. Why not add blog-ish things? Why reimplement SharePoint?" Suddenly we were all fired. So then they shipped me off to Pulte Homes, where I did some work, but the architect for that project and I didn't seem to know what we were doing on the user authorization detail, and I proved blunt enough that they "couldn't afford" to keep me on after a couple months. So then they shipped me off to California to work with Ward & Brown on the Obamacare / ACA implementation there, again late to the party and annoyingly insistent on "helping". But when QA and I asked around where the production/staging servers were, and they realized they had apparently missed that part, they canned all their contractors. All 250 of us. So there I was, stuck in the Neudesic-Phoenix office waiting for them to call me in. And they did. They laid me off. Like the child-man that I was, I accidentally let them catch me choking. They did say they might be able to take me back in a few months, though, so ...
  2. For the next few months I did some contract work for a local entrepreneur. I waited that out for a while, and ultimately I wanted to go back to Neudesic, so ...
  3. In mid-2013 I asked Neudesic to take me back in. They did. They brought me back with open arms. It was more awkward for the lot of them than I or anyone else anticipated. I'd made myself a reputation for being unrefined up to that point, I'm pretty sure. But anyway, when I arrived (a second time) I noticed that more people, including my former manager, were no longer there. They had been laid off too. And I shipped back to Pulte Homes, on another, bigger project. I was up front with them, though, about what specific tools I was unacquainted with at the time, specifically Bootstrap and the like. They shrugged that off and brought me in anyway. That project was led by--ima be frank here--a really, REALLY bitchy, control-freakish, PMS-y woman. I tried to overlook it at the time. I tried to pretend it wasn't so at the time. I'm thinking back half a decade now, and I'm saying it like how I witnessed it. She was horrible. If I ever get in a situation like that again I'll run for the hills. That was awful. She was awful. But anyway, at the time, overlooking that (which I deliberately did), since the lead personality who'd previously been on that project had been laid off and was gone, and I wasn't getting much in the way of introduction, I deliberately, if bombastically, made a ton of assertions and at the same time asked a lot of really stupid, ignorant questions, which ticked off everyone on the team. So they canned me from the team. So that was fun. But what really did me in was that I overheard the Neudesic chief developer director tell my boss that I "shouldn't be writing software". They assigned me some out-of-state ops role. I should have quit then and there; sure, I screwed up, but I'm a developer, not an ops guy. I didn't last long, and I ultimately did resign.
  4. So then in 2014 I got a very odd and delightfully educational gig with DeMark Analytics. First of all, Tom DeMark, the owner, is a wealthy financial technician (a term I had never known before) who did what I imagined doing some years back--he studied financial charts and came up with artificially intelligent algorithms that predicted changes in the market. He's one among many people who come up with this stuff, but even so, that's what he did. So now they were building a product for people who could use these financial chart studies--a charting app, basically, with studies being overlaid on the chart. This entailed financial event data being siphoned into both the back-end engine as well as the client (the web browser). This is high tech stuff, and the pattern in use was CQRS-ES, which I'd never heard of, much less mastered. So here's where it all broke down: 1) We all sat in close proximity to each other. No privacy. All conversations fully exposed. 2) I was hired as a lead, but I was wrong a couple of times when I was adamant; it was observed by the teammates, and so my authoritative experience as a lead developer was never trusted again. 3) Everyone in the company but maybe four of us was either family to Tom DeMark or close friends of Tom DeMark. It was a nepotistic environment. I worked under one of Tom's sons (he was CEO/President), and another of Tom's sons worked under me--well, alongside me, since me being lead didn't work out. He was a bit spoiled. Quick on the Javascript, though; I couldn't keep up with his code, which was charting graphics. Like, at all. So I stopped trying. 4) My pay was adequate (the best I can say about it), but my paycheck was cut monthly. Weird, bizarre, and painful. But it was worth it; I was investing in this, I figured. 5) My direct boss was a super-high-IQ ass. I asked technical questions about the middle/back end of him, like about how Cassandra was to be used, and instead of answering, he'd call everyone over--the CSS front-end developers, everyone--then ask me to ask my question again. All to, I guess, humble me? .. Show me that this was "duh" knowledge that everyone would know and understand? So where this went critical was 6) I again used rhetoric. Crap, man. This pretty much got me fired. I used rhetoric! What I mean is, I insinuated, in so many words, that "your explanation sounds vague, and if I didn't know any better I'd say it almost sounds like you said [something obviously stupid]". Except I didn't say it like that; I actually, literally said, ".. you mean [something obviously stupid]?" I really, truly, genuinely trusted that he understood that I was being rhetorical, but this was the second time I made that mistake--I had made the same mistake at Neudesic, "I can't read that [Javascript code]"; I could, but I was getting at the fact that I shouldn't have to stop and read it--and with these guys it was sabotage. A few minutes after we returned to our desks, he threw his hands up and announced he was switching to Java/Scala. "Sorry Jon." Sorry, Jon? So yeah, he was basically saying "you're fired, please quit." It took a couple more months, but I eventually did.
  5. So then I worked at another consultancy firm, Solution Stream. Utah-based. They seemed to be trying to spread out into Phoenix. "I can do this," I thought. I had learned at Best Software (Sage Software), when I moved to Arizona, how consultants--consultants, not contractors--work, and how they bring prestige to the process of coming up with technical solutions and strategies, documenting them, and working them with the clients. But Solution Stream was really primarily just interested in creating contractors, apparently. Regardless, they didn't appreciate what I brought to the table, they undersold my capabilities, the executives decided they didn't like me, and they literally pulled me off a project that the client and my direct boss said I was doing great with and signed me over to Banner Health as a temp-to-hire (fire). I had no interest in being hired as a permanent Banner Health employee. When my temp-to-hire contract ended (the "temp-" part), everything ended. I swore off all consulting firms at that point. No more consulting firms. Never again.
  6. So then I got picked up at InEight. Bought out by the major construction company Kiewit, InEight was building a cloud version of their Hard Dollar desktop app, which manages large-scale construction logistics (vendors, materials, supplies, etc.). They had some workable plans and ideas. But things broke down real fast. Everyone who interviewed me quit within the year I joined, and it wasn't hard to see why. 1) They had non-technical people at the helm (senior leadership) making some very expensive and frustrating platform and architecture decisions: high performance software on minimal performance hosting. Nothing worked, because they didn't want to spend the money to scale the web tier. 2) The work was outsourced. Most of it was outsourced to India. Eventually they shipped some of it onshore to another midwestern state. But even onshore, most of their staff were H1-B visa holders. Foreigners. Nearly everyone I was working with was from India. Even after so many people quit, I stuck around as long as I could. But eventually I couldn't stand things anymore, my career path had become stagnant, and I knew I couldn't work with the senior executives (no one could, except people from India, I guess). I was about to give two weeks notice when those senior execs pulled me into a room and chewed me out for being "disrespectful". I was done. Never saw them or anyone over there again. (Actually, that's not entirely true; I have maintained a strong friendship with at least one colleague from there. He got laid off a few months after I left. We're friends; we literally just met up this week.)
  7. For the last year and a half I've been working ... somewhere. For now. I came in as a lead developer, but they, too, openly declared me "disrespectful", so I've given up and just been a highly productive, proficient, heads-down programmer. This place, too, is mostly H1-B visa holders from India. I'm surrounded by foreigners. Scarcely a black, white, or Mexican face. It's depressing. I have less and less each year against Indians, but my God, let me work with people of my own culture if I'm here in the USA--just a few like-minded, like-raised friends, just a few? And now my team is getting dismantled, due to a third party taking over what we're doing. So I'm about to get laid off. If I'm not laid off, though, well, ... 1) as a contractor, I don't get paid holidays, I don't get paid vacations, and that was painful enough, but unexpectedly after my hire it turned out there is a mandatory two weeks' unpaid leave during Christmas & New Year's, and that's unacceptable ($thousands of $dollars lost, not to mention depressing since I spend holidays alone, so yes, it's unacceptable). 2) I have been super comfortable, and super complacent, with little to gain in technical growth. It's been ASP.NET MVC with SQL Server and jQuery. And some .NET Core 2.2 and Razor Pages. Woo wee. *sigh* So yeah, I'm open to change, regardless of whether I get laid off.
There's my life story for the last seven years. Stupid, depressing, awful, I've been awful, I've let myself screw myself over time and time again. So here is my new strategy.

 
Even where I've whined and complained, I have no one to blame for my life's frustrations but myself. It's part of the maturing process. I've embraced my learnings and I will carry on. I will try to let go of the past; I only repeat and document it here because I have learned from it, and perhaps you can, too. What do I want in my career path? I miss the days when I was an innovator.

You guys remember AJAX? Yeah? I dreamed up AJAX in 1998 when IE4 came out. I called it "TelnetGUI". Stupid name. Other people came up with the same idea a few years later and earned the credit.

You guys remember Windows Live Writer? Yeah? ... Total rip-off of my PowerBlog app, down to the details. You could say I prototyped Windows Live Writer before Microsoft started working on it. Microsoft even interviewed me after I built PowerBlog, because of PowerBlog and its Microsoft-minded inspirations of component integration. The jerk interviewer was like, "Wait, you mean you don't know C++?! Oh good grief, I thought you were a real programmer." Screw you, Microsoft interviewer. LOL. Anyway, Windows Live Writer came out a couple years later. It took all of PowerBlog's fundamental ideas, even down to gleaning the CSS theme and injecting it into the editor.
 
You guys remember PowerShell? Yeah? I prototyped the idea in or around 2004. I took the ActiveScript COM object, put it in a C++ console container, spoonfed some commands where you could new-up some objects and work with them in a command-line shell, suggested that the sky's the limit if you integrate full-blown .NET CLR and shell commands in this, and showed it to the world on Microsoft's newsgroups. Microsoft was watching; I planted a seed. A year or so later, PowerShell ("Monad") was previewed to the world. I didn't do the dirty work of development of it, but I seeded an idea.
 
You guys remember jQuery UI? Yeah? I cobbled together a windowing plugin for jQuery a year or two before jQuery UI was released. It was called jqDialogForms. Pretty nifty, I thought, but heck, I never got to use it in production.  
 
In fact there's a lot of crap in my attic I recently dug out and up over at https://github.com/stimpy77/ancient-legacy (It really is crap. Nothing much to see.)
 
And, oh yeah, you guys remember Entity Framework, Magical Unicorn Edition? I, too, had been inspired by Fluent NHibernate, and I, too, was working on an ORM library I called Gemli [src]. Sadly, I ended up with a recursion nightmare I myself stopped being able to read, development slowed to a halt, and then suddenly Microsoft announced EF Magical Unicorn Edition, and I observed that it did everything I was trying to do in Gemli plus 99x more. So that was a waste of time. Even so, that was mini-ORM-of-my-own-making #2 or #3.
 
All of these micro-innovations and others are years old, created during times of passion and egotistical self-perception of brilliance. What happened?! I think we can all see what happened. My ego kept bulldozing my career. My social ineptitude vanquished my opportunities. And I got really, really lazy on the tech side.
 
My blog grew stagnant because, frankly, career errors aside, my bold and lengthy philosophical assertions in my blog articles were pretty wrong. Philosophies like "design top down, implement bottom up". Says who? Why? I dunno. Seemed like an interesting case to make at the time. But then people at meetups said they knew my name, read my blog, quoted my articles, and I curled up and squealed and said "oh gawd I had no idea what I was writing". (Actually I just nodded my head with a smile and blushed.)
 
For the last few weeks I have spent, including study time, more than 70 hours a week, working. Working on hard skills growth. Working on side project development--brainstorming, planning. Working on fixing patchy things, like getting this blog up, so I can get into writing again. It's overdue for a replacement, but frankly I might just switch over to http://dev.to/ like all the young cool kids. 
 
Tech Things I Am Paying Attention To 
 
.NET Core 3 is where it's at, and .NET 5 is going to be The Great .NET Redux's great arrival. However, the JVM has had a huge comeback over the last half-decade, and NodeJS and npm, like squirrelly cats, have been sticking their noses in everything. Big client-side Javascript libraries from a year or two ago (Facebook's React, Google's Angular, China's Vue) are now server-side for some dumb reason. Most importantly, software is becoming event-driven. IaaS is gone. PaaS is passé. Kubernetes is now standard, apparently. Microsoft's MSMQ is so 1990s, RabbitMQ so '00s; LinkedIn's Kafka is apparently where it's at, and now Yahoo!'s Pulsar is gaining notoriety for being even more performant.
 
My day job being standard transactional web dev with ASP.NET/jQuery/SQL has made me bewilderingly antsy. If I want to continue to be competitive in complex software architecture and software development, I've got to really go knee deep--no, neck deep--in React/Angular/Vue on the front end; MongoDB, Hadoop, etc. on data; Docker/Kubernetes on the platform; Kafka on data transfer; CQRS+ES on the transaction cycles; DDD as the foundation to argue for it all; and books to explain it all. I need to go to college, and if I don't have time or money for that, I need to be studying and reading and challenging myself at all hours I am free until I am confident as a resource for any of these roles.
 
Enough of the crap reputation of being a wannabe. Let's be. 

 


C# | Career | General Technology | Health and Wellness | Open Source | Opinion | Pet Projects | Social Networking | Software Development | Unresolvable

I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as “a C++ fixer-upper”. I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google’s mantra still lines up. The video demonstrates a lot of the evils of C++. Despite some attempts at self-edumacation, I personally cannot read nor write real-world C++ because I get lost in the gobbledygook he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, The ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives as Java and now Go had, and it has actually been fundamentally evolving at the core, such as to replace patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.

While trying to take Go in with great excitement, I’ve been forced to conclude that Google’s own message delivery sucks, mainly by completely ignoring some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there)—much like Microsoft’s message delivery of C#’s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory, but I can’t find it). The video linked above doesn’t show a single example of Java suckage; it just makes these painful accusations of Java being right in there with C++ as a crappy language to work with. I haven’t coded in Java in about a decade, honestly, but back then Java was the shiznat and code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways and ultimately I find it far less appetizing than C#. But where is Google’s nod to C#? Oh that’s right, C# doesn’t exist, it’s a fragment of someone’s imagination because Google considers Microsoft (C#’s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator’s true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did as far as trying to “fix” C++. In fact, most of the problems Go describes of C++ were the focus of C#’s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google’s objectives themselves, we don’t see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for previously C++ development efforts using perhaps emacs or vi (Rob even listed IDEs as a failure in modern languages). Their requirements in Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn’t get in the way of their business objectives, that performs very fast, and fast compilation that lets them escape out of vi to invoke gcc or whatever compiler very quickly and go back to coding. These are certainly great nice-to-haves, but I’m pretty sure that’s about it.

Consider, in contrast, .NET’s objectives a decade ago, .NET being at the core of applied C# as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn’t suck, because its intentions were gorgeous—to have a language-neutral marshalling paradigm between runtimes—but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocators. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature, it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and the tools because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant level. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s and thus theoretically Go’s performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement which Google, in its corporate and web sandboxes, is oblivious to. I’m not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement which Google in its corporate and web sandboxes is oblivious to, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here; instead they push Python and Java on their app cloud. I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn’t get this; it’s stupid to have to recompile an application’s distribution, software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation. Once again, Google is not out in the real world, they live in a box (their headquarters and their exposed web).

And with the goals of C#,

  • C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go’s web site (http://golang.org/) because the language does not look like much of a readability / maintainability improvement to what Java and C# offered up.

  • Go supposedly offers up memory management, but still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static!
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := operator syntax rather than the = operator for assignment. I suppose this would help eliminate the issue where people would type = where they meant to type == and destroy their variables.
  • Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized, such that late-executed code that should be called after immediate code doesn’t require putting it below the immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it. (See the C# sketch after this list.)
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is
       
        var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

       
      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..
       
        int[] p = null; // declares an array variable; p is null; rarely useful
        var v = new int[100]; // the variable v now refers to a new array of 100 ints

       
      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, never mind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbledygook. C# offers a richly defined System.Array, and all this stuff is transparent to the coder, who really doesn’t need to know that there are pointers, somewhere, associated with the reference to your array—isn’t that the way it all should be? Is it really necessary to have a completely different semantic (new() vs. make())? Ohh yeah. The frickin pointer vs. the reference.
  • I see Go has a fmt.Printf(), plus a fmt.Fprintf(), plus a fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is; http://golang.org/search?q=overloading
  • Go has “goroutines”. It’s basically “go func() { /* do stuff */ }”, and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code, so yes, it’s supported (see the sketch after this list). F# (a young .NET sibling of C#) has this, too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s.
  • Go has channels for communication purposes. C# has WCF for this which is frankly a mess. The closest you can get to Go on the CLR as far as channels go is Axum, which is variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover.
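To ground the defer and goroutine comparisons above, here is a small, self-contained C# sketch of the analogues I am claiming: using/try...finally standing in for defer, and an anonymous delegate queued to the thread pool standing in for “go func() { ... }”. The file path is a placeholder.

    // C# analogues of two Go features discussed above: using/try...finally
    // in place of defer, and an anonymous delegate queued to the thread
    // pool in place of "go func() { ... }". The file path is a placeholder.
    using System;
    using System.IO;
    using System.Threading;

    class GoAnalogues
    {
        static void Main()
        {
            // Go: defer file.Close()  -->  C#: using (...) { ... }
            using (var file = File.OpenText(@"C:\temp\data.txt"))
            {
                Console.WriteLine(file.ReadLine());
            } // file is closed here no matter how the block exits

            // Go: go func() { ... }()  -->  C#: anonymous delegate on the thread pool
            ThreadPool.QueueUserWorkItem(delegate
            {
                Console.WriteLine("running in parallel");
            });

            Thread.Sleep(100); // crude wait so the demo sees the parallel output
        }
    }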

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (that, frankly, typically suck at a design level when they aren’t tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, honestly I think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined everywhere on planet Earth through the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix and applications and services that run on it has been the majority; if nothing else, Microsoft has most certainly still had the lead in software as a company, to which my point is targeted.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly, the fact that Go presentations neglect C# seriously calls Go’s trustworthiness into question.

In my opinion, a better approach to what Google was trying to do with Go would be to take a popular language, such as C#, F#, or Axum, and break it away from the language’s implementation libraries, i.e. the .NET platform’s BCL, replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be both a) a much better language and runtime than Go because it would offer most of the benefits of Go but in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR’s and C#’s multitude of advantages over C/C++), but also b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al, our needs are quite well met. So thanks anyway, Google, but, really, you should go now.


C# | F# | General Technology | Mono | Opinion | Software Development

Nail-biting success with Windows 7 Media Center and CableCard

by Jon Davis 22. June 2009 01:34

I got CableCard working today with Windows 7 Media Center. This is my first CableCard install. The install was not smooth, but it was successful. I'd heard a lot of horror stories about CableCard, but most of these stories were from two years or so ago. I expected the whole matter to be cleaned up by now. It has probably improved a lot, but I was surprised by how bumpy the ride was.

...

COX technicians really, genuinely, deeply hate CableCards. This was the first visit to install a CableCard, but the second visit from COX in the last couple weeks where the subject of CableCards came up. These guys acknowledge that the idea behind CableCard is a sound one, but the problem is that the cards are so different, and the devices are so different, that it's really hard to get a successful install. [I think Microsoft can relate to this sort of experience, being that their software works with most any white box PC on the planet.] Today's installer had a nauseous-looking frown on his face consistently from the moment he climbed the outer stairs to my door until the minute he walked out the door ... although he had a slight skip in his step when he left. I'm not sure if that's because he was glad the torture was over or because it wasn't another failure.

More at: http://thegreenbutton.com/forums/thread/370299.aspx


Electronics | General Technology | Pet Projects

Nine Reasons Why 8GB Is Only Just Enough (For A Professional Business Software Developer)

by Jon Davis 13. February 2009 21:29

Today I installed 8GB on my home workstation/playstation. I had 8GB lying around already from a voluntary purchase for a prior workplace (I took my RAM back and put the work-provided RAM back in before I left that job), but that brand of RAM didn’t work correctly on my home PC’s motherboard. It’s all good now, though: with some high quality performance RAM from OCZ, my Windows 7 system self-rating on RAM I/O jumped from 5.9 to 7.2.

At my new job I had to request a RAM upgrade from 2GB to 4GB. (Since it’s 32-bit XP I couldn’t go any higher.) I asked about when 64-bit Windows Vista or Windows 7 would be put on the table for consideration as an option for employees, I was told “there are no plans for 64-bit”.

The same thing happened with my last short-term gig. Good God, corporate IT folks everywhere are stuck in the year 2002. I can barely function at 4GB, can’t function much at all at 2GB.

If you are a multi-role developer and aren’t already saturating at least 4GB of RAM, you are throwing away your employer’s money, and if you are IT and not providing at least 4GB RAM to developers and actively working on adding corporate support for 64-bit for employees’ workstations, you are costing the company a ton of money due to productivity loss!! I don’t know how many times I’ve seen people restart their computers or sit and wait for 2 minutes for Visual Studio to come up because their machine is bogged down on a swap file. That was “typical” half a decade ago, but it’s not acceptable anymore. The same is true of hard drive space. Fast 1 Terabyte hard drives are available for less than $100 these days; there is simply no excuse. For any employee who makes more than X (say, $25,000), for Pete’s sake, throw in an extra $1000-$2000 more and get the employee two large (24-inch) monitors, at least 1TB of hard drive(s) (ideally 4 drives in a RAID-0+1 array), 64-bit Windows Server 2008 / Windows Vista / Windows 7, a quad-core CPU, and 8 GB of high performance (800+ MHz) RAM. It’s not that that’s another $2,000 or so to lose; it’s that just $2,000 will save you many thousands more in dough. By quadrupling the performance of your employee's system, you’d effectively double the productivity of your employee; it’s like getting a new employee for free. And if you are the employee, making double of X (say, more than $50,000), and if your employer could somehow allow it (and they should; shame on them if they don’t and they won’t do it themselves), you should go out and get your own hardware upgrades. Make yourself twice as productive, and earn your pay with pride.

In a business environment, whether one is paid by the hour or salaried (already expected to work X hours a week, which is effectively loosely translated to hourly anyway), time = money. Period. This is not about developers enjoying a luxury, it’s about them saving time and employers saving money.

Note to the morons who argue "this is why developers are writing big, bloated software that sucks up resources" .. Dear moron, this post is from the perspective of an actual developer's workstation, not a mere bit-twiddling programmer's—a developer, that is, who wears many hats and must not just write code but manage database details, work with project plans, document technical details, electronically collaborate with teammates, test and debug, etc., all in one sitting. Nothing here actually recommends or even contributes to writing big, bloated software for an end user. The objective is productivity; your skills as a programmer are a separate concern. If you are producing bad, bloated code, the quality of the machine on which you wrote it has little to nothing to do with that—on the contrary, a poor developer system can lead to extremely shoddy code, because the time and patience required just to refactor and re-test become such a huge burden. If you really want to test your code on a limited machine, you can rig VMWare / VirtualPC / VirtualBox to temporarily run with less RAM, as shown below. You shouldn't have to punish yourself with poor productivity while you are creating the output. Such punishment is far more monetarily expensive than the cost of RAM.
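
For example, with VirtualBox you can cap a test VM's memory from the command line to simulate a constrained machine (the VM name here is hypothetical; VMWare and Virtual PC expose equivalent memory settings in their VM configuration UIs):

    VBoxManage modifyvm "LowMemTest" --memory 512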

I can think of a lot of reasons for 8+ GB RAM, but I’ll name a handful that matter most to me.

  1. Windows XP / Server 2003 alone takes up half a gigabyte of RAM (Vista / Server 2008 takes up double that). Scheduled tasks and other processes cause the OS to peak out at some 50+% more. Cost: 512-850MB. Subtotal @nominal: ~512MB; @peak: 850MB
  2. IIS isn’t a huge hog but it’s a big system service with a lot of responsibility. Cost: 50-150. Subtotal @nominal: ~550MB; @peak 1GB.
  3. Microsoft Office and other productivity applications often need to be used more than one at a time. For more than two decades, modern computers have supported a marvelous feature called multi-tasking. This means that if you have Outlook open, and you double-click a Microsoft Word attachment, and upon reading it you realize that you need to update your Excel spreadsheet, which in your train of thought leads you to update an Access database, and then you realize that these updates result in a change of product features that you need to reflect in your PowerPoint presentation, you should have been able to open each of these applications without missing a beat, and by the time you're done you should be able to close all these apps in no more than one passing second per click of each app's [X] close button. Each of these apps takes up as much as 100MB of RAM, Outlook typically even more, and Outlook is typically always open. Cost: 150MB-1GB. Subtotal @nominal: 700MB; @peak: 2GB.
  4. Every business software developer should have his own copy of SQL Server Developer Edition. Every instance of SQL Server Developer Edition takes up a good 25MB to 150MB of RAM just for the core services, multiplied across each of the support services. Meanwhile, Visual Studio 2008 Pro and Team Edition come with SQL Server 2005 Express Edition, not 2008, so for some of us that means two installations of SQL Server Express. Both SQL Server Developer Edition and SQL Server Express Edition are ideal to have on the same machine, since Express doesn't have all the features of Developer, and Developer doesn't have the flat-file support that is available in Express. SQL Server sitting idle still costs a LOT of CPU, so quad-core is quite ideal. Cost: @nominal: 150MB; @peak: 512MB. Subtotal @nominal: 850MB; @peak: 2.5GB. We haven't even hit Visual Studio yet.
  5. Except in actual Database projects (not to be confused with code projects that happen to have database support), any serious developer would use SQL Server Management Studio, not Visual Studio, to access database data and to work with T-SQL tasks. This would be run alongside Visual Studio, but nonetheless as a separate application. Cost: 250MB. Subtotal @nominal: 1.1GB; @peak: 2.75GB.
  6. Visual Studio itself takes the cake. With ReSharper and other popular add-ins like PowerCommands installed, Visual Studio just started up takes up half a gig of RAM per instance. Add another 250MB for a typical medium-size solution. And if you, like me lately, work in multiple branches and find yourself having to edit several branches for different reasons, one shouldn’t have to close out of Visual Studio to open the next branch. That’s productivity thrown away. This week I was working with three branches; that’s 3 instances. Sample scenario: I’m coding away on my sandbox branch, then a bug ticket comes in and I have to edit the QA/production branch in an isolated instance of Visual Studio for a quick fix, then I get an IM from someone requesting an immediate resolution to something in the developer branch. Lucky I didn’t open a fourth instance. Eventually I can close the latter two instances down and continue with my sandbox environment. Case in point: Visual Studio costs a LOT of RAM. Cost @nominal 512MB, @peak 2.25GB. Subtotal @nominal: 1.6GB; @peak: 5GB.
  7. Your app being developed takes up RAM. This could be any amount, but don’t forget that Visual Studio instantiates independent web servers and loads up bloated binaries for debugging. If there are lots of services and support apps involved, they all stack up fast. Cost @nominal: 50MB, @peak 750MB. Subtotal @nominal: 1.65GB; @peak: 5.75GB.
  8. Internet Explorer and/or your other web browsers take up plenty of RAM. Typically 75MB for IE to be loaded, plus 10-15MB per page/tab. And if you’re anything like me, you’ll have lots and lots and LOTS of pages/tabs by the end of the day; by noon I typically end up with about four or five separate IE windows/processes, each with 5-15 tabs. (Mind you, all or at least most of them are work-related windows, such as looking up internal/corporate documents on the intranet or tracking down developer documentation such as API specs, blogs, and forum posts.) Cost @nominal: 100MB; @peak: 512MB. Subtotal @nominal: 1.75GB; @peak: 6.5GB.
  9. No software solution should go untested on as many platforms as are going to be used in production. If it's a web site, it should be tested on IE 6, IE 7, and IE 8, as well as the current version of Opera, Safari 3+, Firefox 1.5, Firefox 2, and Firefox 3+. If it's a desktop app, it should be tested on every compatible version of the OS. If it's a cross-platform compiled app, it should be tested on Windows, Mac, and Linux. You could have an isolated set of computers and/or QA staff to look into all these scenarios, but when it comes to company time and productivity, the developer should test first, and he should test right on his own computer. He should not have to shut down to dual-boot. He should be using VMWare (or Virtual PC, or VirtualBox, etc.). Each VMWare instance takes up the RAM and CPU of a normal system installation; I can't comprehend why some people think that a VMWare image should only take up a few GB of hard drive space and half a gig of RAM; it just doesn't work that way. Also, in a distributed software solution with multiple servers involved, firing up multiple instances of VMWare for testing and debugging should be mandatory. Cost @nominal: 512MB; @peak: 4GB. Subtotal @nominal: 2.25GB; @peak: 10.5GB.

Total peak memory (on 64-bit Vista SP1, whose heavier footprint was not accounted for in #1): 11+GB!!!
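
As a sanity check, here's a quick sketch that adds up the nominal and peak figures from the nine items above. (The running subtotals in the list compound their rounding as they go, so the raw totals land slightly lower.)

    using System;
    using System.Linq;

    class RamTally
    {
        static void Main()
        {
            // Nominal and peak costs in MB, taken from items 1-9 above.
            int[] nominalMB = { 512, 50, 150, 150, 250, 512, 50, 100, 512 };
            int[] peakMB    = { 850, 150, 1024, 512, 250, 2250, 750, 512, 4096 };

            Console.WriteLine("Nominal: {0:F2} GB", nominalMB.Sum() / 1024.0); // ~2.23 GB
            Console.WriteLine("Peak:    {0:F2} GB", peakMB.Sum() / 1024.0);    // ~10.15 GB
            // Add roughly a gigabyte for 64-bit Vista's heavier footprint
            // and the peak lands at 11+ GB.
        }
    }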

Now, you could argue all day long that you can "save money" by shutting down all those "peak" processes to use less RAM. I'd argue all day long that you are freaking insane. The 8GB I bought for my PC cost me $130 from Dell. Buy, insert, test, save money. Don't be stupid and wasteful. Make yourself productive.

Windows 7 Beta first Impressions

by Jon Davis 14. January 2009 04:47

Everyone has already made their Windows 7 first-impression comments, but I had to see Windows 7 for myself, as I always do with Windows pre-releases. So here are my first experiences. I tried the earlier PDC release, downloaded from a torrent, but after booting from the DVD I got an error saying that it could not locate an installer file.

Windows could not collect information for [OSImage] since the specified image file [install.wim] does not exist.

I chalked it up to a bad torrent download and tossed the copy.

Then Microsoft released Beta 1 this month. I tried downloading it via torrent again, and the download was interrupted. I tried to resume the download, but no seeds turned up after hours. I found another torrent, and about half a day and half a download later I realized Microsoft had actually released this version to the open public for anyone to download, so I deleted that torrent and started the download again, this time straight from Microsoft.

The next day, the download having been completed while I was sleeping, I burned it to DVD-RW and gave it a run. Guess what?

Windows could not collect information for [OSImage] since the specified image file [install.wim] does not exist.

Oh, poop. So the original download wasn't flawed after all, at least not in this regard; it's something else.

I tried booting the DVD in VMWare on another PC, and it worked! Aha! It's a hardware problem, perhaps a DVD driver problem. My computer is only about one and a half years old, but the DVD drive is about four years old. I Googled around a bit for more information on this ridiculous error, and the only advice I could find amounted to two suggestions:

  1. Someone commented, "You probably found an old DVD-RW from behind a sofa. Use a new DVD-R and that'll fix it right up." Hm. Doubtful. I burned another DVD-RW (same brand, roughly the same condition), and this time I checked off the "Verify" option in my burner software, and it checked out. Still got the error. It was at this point that I tried it in VMWare, and it got past this error, so no, it's not a bad disc. I suppose it could still be the drive failing to read the disc, though. In other words, the drive might have failed, not the disc.
  2. Someone said, "I was using an old USB-attached DVD drive that the BIOS enabled me to boot the disc from, but after installing an IDE-based DVD drive in the actual computer the error went away." Well, that stinks, because I'm using an IDE-based DVD drive; it's never given me any problems, except that it often refuses to burn discs.

So I pondered. I was leaning towards the #2 scenario as a clue: I know Microsoft was trying to thin down the core surface area in Windows 7, and I bet the problem is missing drivers for my drive. But I wondered whether "new" was the keyword here, not the form (IDE vs. USB).

I just happened to have an external USB-based DVD drive I had recently purchased at Amazon. USB, but new. Could it work? I ran to the back room and grabbed it, brought it back in, stretched it across the room to the outlet, configured the BIOS to boot from USB, and booted the Windows 7 DVD. I went to install and... yes!! It got past the error.

So here's the first first impression: While I greatly appreciate Microsoft's attempt to slim down the core dependency set of Windows and its driver set, in this area (CD/DVD drive support) they chopped off WAY too much. Perhaps driver support isn't the issue here, but if it is, this IS a bug. There are a LOT of people who were power users 4 years ago, who invested in the latest and greatest back then, had no Windows version but XP, and were reluctant to switch to Vista because of the rough corners that Windows 7 has since rounded out. These years-old systems are surely more than adequate for Windows 7 performance-wise, but CD/DVD drivers are right up there with the USB subsystem and SATA among the drivers most needed for a successful install. Fix this, guys; this is a BUG, not a mere risky compromise (the intentional dropping of legacy hardware support). Microsoft can't afford to lose THIS hardware.

I experienced no other hardware glitches, fortunately; even my audio hardware was working, and the Aero experience worked right from the first post-setup boot. There was only one other hardware-related annoyance: my two monitors were backwards... I had to mouse far to the right to access the left monitor. Yes, this is configurable with the Control Panel, but I got annoyed watching setup and dealing with dialog boxes, etc., while everything was backwards and setup didn't have the Control Panel available to me. It would've been nice, I suppose, if there were one optional button during setup that brought up the Monitors dialog. But at least the Monitors dialog is no longer accessed through the wholly inappropriately named (in Vista's time) "Personalization" dialog, which was SO ridiculously placed, since monitor setup (resolution, monitor placement, monitor drivers, color depth, etc.) has little to nothing to do with personalization. Might as well have renamed Control Panel to "Personalizations"... but they got it, I'm glad.

The new Windows 7 is all about rounding off the corners and adding the polishing touches that Windows Vista only touched on and inspired.

  1. More ever-present Aero Glass experience, with lots of smooth animations and roll-overs.
  2. Explorer.exe got a huge overhaul with Aero and usability enhancements.
    • As is very well known, the ubiquitous taskbar that has been around through Windows 95, Windows NT 4, Windows 98, Windows ME, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 (did I miss one somewhere? surely I did ..) is now no more. There is no longer a taskbar. There is a bar down there, but it's more like a "smartbar"; the Quick Launch toolbar and the taskbar have sorta merged. It's all very much inspired, no doubt, by Mac OS X's dock, which frankly disgusts me. But so far I don't hate the Windows 7 smartbar thingamajig. I do very strongly believe that someone (i.e. Stardock), if not Microsoft themselves, will be pushing a "Windows Vista taskbar" as an add-on accessory to Windows 7 for those people who preferred it, as there is now a rather obvious market for one.
    • The Windows Vista desktop compositing system had an awesome feature: Direct3D and high-definition video could be managed within an already-D3D desktop environment. Vista only slightly touched on the advantages, with Windows key + Tab and the taskbar's mouseover tooltip previews, both of which re-display live windows in small, distorted form in real time with no performance loss. Windows 7 expands on this. I'm still discovering examples, but the most obvious is the smartbar mouseover for Internet Explorer, which shows each tab and lets you pick one as it renders in real time. I hope to find a lot more such scenarios.
  3. Paint, Calculator, and Wordpad have finally been rewritten with an Office 2007 feel. We no longer have to puke on the Windows 95 versions. I didn't check whether Notepad was replaced with something anywhere near the simplicity yet completeness of Notepad2, but I doubt Notepad was touched, which, if so, is a shame. But at least there's always Notepad2. *cough*
  4. In general, the things in Windows such as in the Control Panel that got moved around a lot in Vista, and that everyone complained about (such as me complaining about Monitor settings showing up under stupid Personalization), have been rearranged again. Generally, things are just better and more thought out. Vista was a trial run in this matter; the Windows 7 beta is just more thought through. There are still quirky "features", but nothing I've found so far that is just glaringly wrong. I do think that the personalization bits are now broken apart too much, but this might just be a style issue that needs some getting used to. Microsoft seems to be leaning more than before towards the Apple/Mozilla approach of pursuing minimalist options while burying advanced features down in an obvious "Advanced" click-trail. Themes are consolidated sets now, a little more like Win95 Plus! themes in the sense of consolidation, and not so much isolated background, color, and sound options. But those options as individual settings are still there. In fact, Sounds is now (finally) a personalization configuration, as it should be.
  5. You start off with a big fish. Literally. It's a nice painting (of a fish). But come on. It's a fish! I went to choose a different background image, and, while I could very possibly be mistaken, I think the number of background images you can choose from has been slashed by half since Vista, and the new offerings in the theme picker don't look as good. Boooo!
  6. Other people ran the numbers so I didn't do any testing, but the general consensus is that Windows 7's performance is closer to Windows XP's than to Windows Vista's. (Read: It's very performant.)
  7. The max system rating has been nudged up from 5.9 to 7.9. My score of 5.7 on Windows Vista went up to 5.9 in Windows 7... but given the max of 7.9 my year-and-a-half old PC is no longer 0.2 from ceiling. *sob*
  8. I was unimpressed to find that the color palettes across all themes, just like IE 8 beta on Vista, are way too bright. It's ugly and uncomfortable. It's not easily configurable to make darker, either.
  9. I haven't stressed Windows 7 yet with software to see how stable it is, but one of the first apps I downloaded was Google Chrome and that puked. All of Windows froze up while I was doing something else, too, but I don't remember what it was, and that sort of thing is something I'd expect from a Beta. 

I have one other complaint. Windows Vista and Office 2007 introduced some really nice glow animations on buttons. Windows 7 pushes the Office 2007 glow and transition animations everywhere. The new smartbar (taskbar replacement) has a really, really cool "you just clicked me!" gradient animation that is almost magical. It's nice, but the animations are so slow that they're actually rather obnoxious. For example, in the new Calculator, if you simply hover over and click on a button, yeah, blue-gray turns amber, but mouse away and it seems to take a full three or four seconds to animate back to the original color. It's artistically nice, but it's just too long, and I think it will be too distracting. It might actually produce some serious usability issues: fast-moving users are going to be forced to slow down, because the "feedback loop" they get from the screen will be just a big blur. I really don't like that. It's already making me a little nauseous. Weird, huh.

I think Vista's close/maximize/minimize effects get the animation timings just right in this matter. Office 2007's ribbon buttons were just over the edge for my taste (too slow), and I could be wrong, but Windows 7 in various places feels like it tripled the Office 2007 animation timings (very, very slow).



Computers and Internet | General Technology | Operating Systems

SD Cards' Capacities Are Exploding

by Jon Davis 8. July 2008 21:16

Two years ago I really went out and splurged, getting myself a halfway decent 6MP digital SLR camera, lenses, a travel case, and the largest-capacity SD (Secure Digital) memory card I could get at Best Buy. Altogether, I spent somewhere on or around $1,000. The SD card was 2GB. It was about $100 at the time, as far as I can remember. Meanwhile, I've found myself using it from time to time like a tiny fingertip-held floppy disk.

One year ago, solid state drives started to take off, and 32GB capacities were about the biggest you could get. Anything more than that cost about $1,000.

Now, 32GB SD cards are available at Amazon.com for less than $300. It's this tiny little thing, and yet it has room enough to dual-boot full installations of Vista and Linux with a complete suite of developer tools. So then the questions become: do BIOSes support booting from these things? And shouldn't these be the new solid-state standard?

It might be a speed issue. The "15MB" rating on the card means that it transfers data at only 15MB/s, so perhaps therein lies the problem. 52x CD-ROM drives transfer at around 64 megabits per second (8 megabytes per second), so this card is only about double the speed of a standard high-speed CD-ROM drive. By contrast, a SanDisk SSD5000 Solid State Drive (64GB) has a transfer rate of 121MB/s, about eight times as fast as this little SD card.
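
For a sanity check on those comparisons (1x CD-ROM speed is the standard 150KB/s), a quick sketch:

    using System;

    class TransferRateMath
    {
        static void Main()
        {
            // 1x CD-ROM speed is defined as 150 KB/s.
            double cdRom52x = 52 * 150 / 1024.0; // ~7.6 MB/s for a 52x drive
            double sdCard   = 15.0;              // the card's rated 15 MB/s
            double ssd      = 121.0;             // SanDisk SSD5000's rated 121 MB/s

            Console.WriteLine("52x CD-ROM:   {0:F1} MB/s", cdRom52x);
            Console.WriteLine("SD vs CD-ROM: {0:F1}x", sdCard / cdRom52x); // ~2x
            Console.WriteLine("SSD vs SD:    {0:F1}x", ssd / sdCard);      // ~8x
        }
    }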

Even so, the geek in me wonders how far one can go with shrinking UMPCs with technologies like this.



General Technology | Computers and Internet

How My Microsoft Loyalty Is Degrading

by Jon Davis 7. June 2008 16:59

I've sat in this seat and often pronounced my discontent with Microsoft or a Microsoft technology, while still proclaiming myself to be a Microsoft enthusiast. Co-workers have often called me a Microsoft or Windows bigot. People would even give me written job recommendations pronouncing me "one who particularly knows and understands Microsoft technologies".

But over the last year or two I've been suffering from discontent, and I've lost that Microsoft spirit. I'm trying to figure out why. What went wrong? What happened?

Maybe it was Microsoft's selection of Ray Ozzie as the new Chief Software Architect. Groove (which was Ozzie's legacy) was a curious beast, but surely not a multi-billion-dollar revenue product; at best it was a network-based software experiment. Groove's migration to Microsoft under the Office umbrella would have been a lot more exciting if only it had been quickly adopted into the MSDN vision and immediately given expansive and rich MSDN treatment, which it was not. Instead, it was gradually rolled in, and legacy SDK support just sort of tagged along or else "fell off" in the transition. Groove was brought in as an afterthought, not as a premier new Microsoft offering. Groove could have become the new Outlook, a rich, do-it-all software platform that consolidated team workflows and data across teams and disparate working groups, but instead it became just a simple little "IM client on steroids and then some", and I quickly abandoned it as soon as I discovered that key features such as directory sharing weren't supported on 64-bit Windows. So to bring Ozzie in and have him sit in that chair, and then give Ozzie's own Groove that kind of treatment--Groove being only an example, but an important, symbolic one--really makes me think that Microsoft doesn't know what on earth it's doing!! Even I could have sat in that chair with a better, broader sense of software operations and retention of vision, not that I'm jealous or would have pursued that chair. The day I heard Ozzie was selected, I immediately moaned, "Oh no, Microsoft is stuck on the network/Internet bandwagon and has forgotten its roots, the core software platforms business!!" The whole fuzzy mesh thing that Microsoft is about to push is an obvious example of where Microsoft is going as a result of bringing in Ozzie, and I hardly find a network mesh compelling as a software platform when non-Microsoft alternatives can so easily and readily exist.

Maybe it's Microsoft's audacity in abandoning the legacies in their toolsets, as they have done with COM and with VB6. There still remains zero support for easily building COM objects using the Visual Studio toolsets, and I will continue to grumble about this until Microsoft supports an alternative component technology that is native to the metal (or until I manage to get comfortable with C/C++ linked libraries, a skill I still have to develop entirely in my spare time, which is a real drag when there is no accountability or team support). I'm still floored by how fast Microsoft abandoned DNA for .NET -- I can completely, 100% understand it, since DNA reached its limits and needed a rewrite / rethink from the bottom up, but the swappage of strategies is still a precedent that leaves a bad taste in my mouth. I want my personal investments in software discovery to be worth something. I'm also discouraged--in the literal sense of the word, I'm losing courage and confidence--by the drastic, even if necessary, evolutionary changes Microsoft keeps making to its supported languages. C# 2 (with stuff like generics support) is nothing like C# 1, and C# 3 (with var and LINQ) is nothing like C# 2. Now C# 4 is being researched and developed, with new support for dynamic language interop (basically, weak typing), which is as exciting as LINQ was, but I have yet to adopt even LINQ, and getting LINQ support into CLR object graphs is a notorious nightmare; not that I would know firsthand, but everyone who tries it pronounces it horrible and massive. Come to think of it, it's Microsoft's interop strategy that has been most frustrating. COM is not Remoting, and Remoting is not WCF. WCF isn't even supported in Mono, so for high-performance, small-overhead interprocess communication, what's the best strategy, really? I could use WCF today, but what if WCF is gone and forgotten in five years?
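
To make that language drift concrete, here's a small sketch of one task (filtering even numbers) written in each successive idiom. Nothing here is from any particular Microsoft sample; it's just an illustration of the C# 1 → 2 → 3 progression:

    using System;
    using System.Collections;          // the C# 1 way
    using System.Collections.Generic;  // the C# 2 way
    using System.Linq;                 // the C# 3 way

    class LanguageDrift
    {
        static void Main()
        {
            // C# 1: untyped ArrayList, manual loop, casts everywhere.
            ArrayList oldList = new ArrayList();
            oldList.Add(1); oldList.Add(2); oldList.Add(3); oldList.Add(4);
            ArrayList oldEvens = new ArrayList();
            foreach (object o in oldList)
                if ((int)o % 2 == 0) oldEvens.Add(o);

            // C# 2: generics bring type safety; anonymous delegates appear.
            List<int> list = new List<int>(new int[] { 1, 2, 3, 4 });
            List<int> evens2 = list.FindAll(delegate(int n) { return n % 2 == 0; });

            // C# 3: var, lambdas, and LINQ make the same thing declarative.
            var evens3 = list.Where(n => n % 2 == 0);

            Console.WriteLine("C# 1: {0}  C# 2: {1}  C# 3: {2}",
                oldEvens.Count, evens2.Count, evens3.Count());
        }
    }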

Maybe it's the fact that I don't have time to browse the blogs of Microsoft's developer staff. They have a lot of folks over there, and while it's pretty tempting to complain that Microsoft "codes silently in a box", the truth is that there are some pretty good blogs being published by Microsofties, such as "If broken it is, fix it you should", albeit half of which I don't even understand without staring at it for a very long time. Incidentally, ScottGu does a really good job of "summing up" all the goings-on, so thumbs-up on that one.

I think a lot of my abandonment of loyalty to Microsoft has to do with the sincerity of my open complaint about Internet Explorer: it is the most visible and therefore most important development platform coming from Redmond, yet it is so behind the times and non-innovative compared with the hard work that the WebKit and Mozilla teams are pouring their blood, sweat, and tears into, that things like this [http://digg.com/tech_news/Time_breakdown_of_modern_web_design_PICTURE] get posted on my wall at the office, cheerily.

Perhaps it's the over-extended yet limited branding Microsoft did with Vista, such that things like this [http://reviews.cnet.com/8301-13549_7-9947498-30.html] nearly make me shed a tear or two over what Windows branding has become. The Windows Energy background looks neat, but it's also very forthright and "timestamped", kind of like how disco in the 70's and synth-pop in the 80's were "timestamped": they sounded neat in their day but quickly became difficult to listen to. That's what happens with too strong an artistic statement. Incidentally, Apple's Aqua interface is also "timestamped", but at least it doesn't default to a strong artistic statement plastered all over the entire screen. I like the Vista taskbar, but what's up with the strict black; why can't that or other visual aspects be tweaked? At least it's mostly neutral (who wants a bright blue or yellow taskbar?), but it's still just a bit imposing IMO.

I'll bet it has to do with the horrifying use of a virtualized Program Files directory in Windows Server 2008, a practice that went unannounced. This is the sort of practice that makes it VERY difficult to trust Microsoft Windows going forward at all. If Windows is going to put things in places different from where I, as the user, told it to put them, then we have a behavioral disconnect--software should exist to serve me and do as I command, not to protect me from myself while deceiving me.
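
For context, the mechanism here is UAC file virtualization: when a legacy 32-bit process without a UAC manifest writes under Program Files, Windows can silently redirect the write into a per-user VirtualStore folder. A minimal sketch of where to look; the application path below is a hypothetical example:

    using System;
    using System.IO;

    class VirtualStoreCheck
    {
        static void Main()
        {
            // Hypothetical file a legacy app believes it wrote under Program Files:
            string intended = @"C:\Program Files\MyLegacyApp\settings.ini";

            // Where UAC file virtualization may actually have put it:
            string redirected = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
                @"VirtualStore\Program Files\MyLegacyApp\settings.ini");

            Console.WriteLine("In Program Files: " + File.Exists(intended));
            Console.WriteLine("In VirtualStore:  " + File.Exists(redirected));
        }
    }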

At the end of it all, I think my degrading sense of loyalty could just be the simple fact that I really am trying to spread out and discover and appreciate what the other players are doing. I mentioned before that Mac OS X is still the ultimate, uber OS, but now that I have it, I confess, I had no idea. Steve Jobs is brilliant, and it's also profound how much of OS X is open source--basically all of the UNIXy bits--which says a lot about OSS. Mind you, there are parts of the Mac I genuinely do not like and have never liked, such as the single menu bar, which violates key rules of UI design. I also generally find it difficult to manage multiple applications running at once, which is why I much prefer the Windows taskbar over the Dock, if only because it's more predictable; and although it violates UI principles, I prefer Alt+Tab across all windows to Command+Tab across applications, because every window is its own "workflow" regardless of which application owns it. But, among other things, building on PostScript for rendering was a fantastic idea; on the other hand, Microsoft's ClearType would have been difficult to achieve if Windows had used PostScript for rendering. Meanwhile, learning and exposing myself to UNIX/Linux-based software is good for me as a growing software developer, and it's impossible to discover cleanly in Windows-land without using virtual machines.

In other words, the only way one can spread out and discover the non-Microsoft ways of doing things, and appreciate the process of doing so, is to stop swearing by the Microsoft way to begin with, and approach the whole thing with an open mind. In the end, the Microsoft way may still prove to be the best, but elimination of bias (on both sides) is an ideal goal to be achieved before pursuing long-term personal growth in software.



General Technology | Operating Systems | Software Development | Microsoft Windows | Mac OS X

The Down Side of the Mac Mini

by Jon Davis 22. May 2008 23:23

I've only been blogging in the context of late-night fun lately, so bear with me; someday soon I'll get bored with all this and "get back to work" here on my blog.

So I got Vista installed via Boot Camp to get some experience with native Windows on my Mac Mini hardware. My hardware was the best that Apple offered in Mini form (2GHz + 2GB + 160GB). Here are the Vista performance assessment results; needless to say, I'm unsurprised yet nonetheless still disappointed by the horrible graphics card in this thing. But it's still a decent little machine for its size.

[Screenshot: Windows Experience Index results. Numbers are on a scale of 0 to 5.9, not 0 to 10.]

Incidentally, a buddy of mine told me that he has a friend who bought an Apple TV and hacked it into what is literally a Mac Mini running Mac OS X. You can get a new Mac this way for ~$250-300!! I'd have reconsidered this Mac Mini if the Apple TV didn't have its limitations in display output ports.



General Technology | Computers and Internet


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.


