Two First Things Before Switching To Windows 8 (And A Rant On Single Identity)

by Jon Davis 29. November 2013 00:20

I am normally an eager if not early embracer, especially when it comes to Windows. Windows 8, however, was an exception. I've tried, numerous times, to embrace Windows 8, but it is such a different operating system, and it had so many problems at launch, that I was far more eager to chuck it and keep calling Windows 7 "the best operating system ever". My biggest frustrations with it were just like everyone else's--no Start Menu, awkward accessibility, and so on--since I don't use and don't want a tablet for computing, just like I don't want to put my fingerprints all over my computer monitor, hello!? But the dealbreaker was that, among other things, I was trying to use Adobe Premiere Pro for video editing and I kept getting bluescreens, whereas in Windows 7 I didn't. And it wasn't random; it was unavoidable.

At this point I've gotten over the UI hell, since I've now spent more time with Windows Server 2012 than with Windows 8, and Server 2012 has more or less the same UI. But now that Windows 8.1 is out and seems to address so many of the UI and stability problems (I've yet to prove out the video driver stability improvements, but I'm betting on the passing of a year's time), I got myself a little more comfortable with Windows 8.1 by replacing the host operating system for the VMWare VM I had been using for my [now former] client. It was just a host OS, but it was a start. But I've moved on.

In my home office environment, making the switch to Windows 8.1 more permanently has been slowly creeping up my priority list, particularly now that I have the opportunity of a clean slate, in multiple senses of the term, not the least of which is that I had a freed-up 500 GB SSD just begging to be added to my "power laptop" (a Dell) to complement its 2nd gen i7, its upgraded 16GB of RAM, and its existing 500 GB SSD. Yes, I know that 1TB SSDs are out there now, but I already have these. So last night and tonight have been spent poking around with the equivalent of a fresh machine, as this nice laptop is still only a year old.

The experience so far of switching up to Windows 8.1 has been smooth. The first things that went on this drive were Windows 8.1, Visual Studio 2013, and Adobe Creative Cloud (the latter of which has taken a back seat lately, but I plan on doing video editing on weekends again down the road). Oh, and Steam, of course. ;) All of these things and the apps that load under them were set up within hours and finished downloading and installing overnight while I was sleeping.

But in the last hour I ran into a concern that motivated me to post this blog entry about transitioning to Windows 8. It has to do with Microsoft accounts. Before I get into that, let me get one thing out of the way: the title mentions "two things", so the first is that if you hated Windows 8.0, try Windows 8.1, because the Windows 8.0 quirks are much easier to swallow now, which means that you won't be so severely distracted from all the nice new features, not the least of which is amazing startup time.

Now then. Microsoft Accounts. I want to like them. I want to love them. The truth is, I hate them. As a solution, it is oversold, and it is a reckless approach to a problem that many if not most people didn't have until Microsoft shoved the solution down their throats, creating the very problem it supposedly solves.

So before I go on ranting about that, here's the one other thing you should know. If you're tempted to follow the recommended procedure for setting up Windows 8.x but you want a local login/password for your computer or device, not one managed by your Microsoft Account, don't follow the recommended procedure. Look for the small text offering any opportunity to skip the association or creation of a Microsoft account for your device. More importantly, even once Windows is installed with a local user account, your account will be overhauled and converted to a Microsoft Account (managed online), and your username/password changed to the Internet account's username/password, unless you find that small text at every Microsoft Account login opportunity.

[Screenshots: the Microsoft account sign-in prompts during setup]

If you want to use apps that require you to log into a Microsoft account, such as the Store, Games, or Music apps, and your Windows profile is already a Microsoft Account profile, then you might be logged in automatically; otherwise it'll prompt you, and then all apps are associated with that account. You may not want to do that. I didn't want to do that. I don't want my Internet password to be my Windows password, and I certainly don't want my e-mail address to be visibly displayed as the account name to anyone who looks at my locked computer. I like "Jon". Why can't it just be "Jon"? Get off, Microsoft! Well, it's all good, I managed to stick with a local account profile, but as for these apps, there was a detail that I didn't notice until I did a triple-take--yep, it took me three retroactive account conversions to notice the option. When you try to sign into a Microsoft Account enabled app like Games or Music and it begins the prompting process to convert your local user profile to a Microsoft Account profile, there is small text at the bottom that literally saves the day! It reads, "Sign into each app separately instead (not recommended)". No, of course it's not recommended, because Microsoft wants your dependency upon their Microsoft Account cloud profile strategy to succeed so that they can win the cloud wars. *yawn* Seriously, if you want a local user profile and you didn't mind how, for the last couple of decades, Internet-enabled apps had you reenter the same credentials or maintain a separate set of credentials, then yes, this action is recommended.

I would also say that you should want a local user profile, and you should want to maintain separate credentials for different apps, and let me explain why.

I ran into this problem in Windows because everything I do with gaming is associated with one user profile, and everything I do with new software development is associated with another profile. But I want to log into Windows only once.

I don't want development and work related interests cluttering up my digital profile that exists for games, and I don't want my gaming interests cluttering up my digital profile that exists for development and work. Likewise, I don't want gamer friends poking around at my developer profile, and I don't want my developer friends poking around at my gaming history. Outside of Microsoft accounts, I have the same attitude about other social networks. I actually use each social network for a different kind of crowd. I use Facebook for family, church friends, and good acquaintances I trust, and for occasional distracting entertainment. I use Twitter and Google+ for development and career interests and occasional entertainment and news. And so on. Now I know that Google+ has this great thing called circles, but here's the problem: you only get one sales pitch to the world, one profile, one face. I have a YouTube channel that has nothing to do with my work; I didn't want to put YouTubey stuff on it for co-workers and employers to see, nor did I want work stuff to be seen by YouTube visitors. Fortunately Facebook and Google+ have "pages" identities, and that's a great start to helping with such problems, though I feel weird using "pages" for my alter egos rather than for products or organizations.

I have a problem with Microsoft making the problem worse. Having just one identity for every app and every web site is a bad, bad idea.

Even anonymity can be a good thing. I play my favorite game as "Stimpy" or as "Dingbat"; I don't want people to know me by name, that's just creepy. Who I really am is a non-essential element of my interaction with the application, so long as I am uniquely identified and validated. I don't want to be known on a web site as "that one guy, with that fingerprint, who buys that food, who plays those games, who watches those videos, who expressed those comments". No. It's trending now to use Facebook identities and the like for comments to eliminate anonymity, the idea being to get commenters to stop being so malicious, but it's making other problems worse. I don't want my Facebook friends and family to potentially know about my public comments on obscure articles and blog posts. No, this isn't good. Let me isolate my identity to my different interests; what I do over here, or over there, is none of your business!


Opinion | Windows

I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as "a C++ fixer-upper". I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google's mantra still lines up. The video demonstrates a lot of evils of C++. Despite some attempts at self-edumacation, I personally cannot read or write real-world C++ because I get lost in the gobbledygook that he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, the ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives as Java and now Go have had, and it has actually been fundamentally evolving at the core, such as to replace patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.
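To unpack that comment with a quick illustration of my own (mine, not from the video): what "replacing patterns with language features" means in C# is that behavior which once required a helper-class pattern and a hand-written loop becomes an extension method plus a lambda. A minimal sketch:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class EnumerableExtensions
    {
        // An extension method: new behavior "attached" to an existing type,
        // where older code needed a separate helper or decorator class.
        public static IEnumerable<int> Evens(this IEnumerable<int> source)
        {
            // The lambda replaces a hand-written loop or a separately
            // declared delegate.
            return source.Where(n => n % 2 == 0);
        }
    }

    class Demo
    {
        static void Main()
        {
            int[] numbers = { 1, 2, 3, 4, 5, 6 };
            foreach (int n in numbers.Evens())
                Console.WriteLine(n); // prints 2, 4, 6
        }
    }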

While trying to take Go in with great excitement, I've been forced to conclude that Google's own message delivery sucks, mainly by completely ignoring some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there)—much like Microsoft's message delivery of C#'s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory but I can't find it). The video linked above doesn't show a single example of Java suckage; it just makes these painful accusations of Java being right in there with C++ as a crappy language to work with. I haven't coded in Java in about a decade, honestly, but back then Java was the shiznat and the code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways, and ultimately I find it far less appetizing than C#. But where is Google's nod to C#? Oh that's right, C# doesn't exist; it's a figment of someone's imagination, because Google considers Microsoft (C#'s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator's true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did, as far as trying to "fix" C++. In fact, most of the problems Go describes of C++ were the focus of C#'s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google's objectives themselves, we don't see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for what were previously C++ development efforts, using perhaps emacs or vi (Rob even listed IDEs as a failure in modern languages). Their requirements in Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn't get in the way of their business objectives, that performs very fast, and that compiles fast enough to let them escape out of vi, invoke gcc or whatever compiler, and go right back to coding. These are certainly great nice-to-haves, but I'm pretty sure that's about it.

Consider, in contrast, .NET’s objectives a decade ago, .NET being at the core of applied C# as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn't suck, because its intentions were gorgeous—to have a language-neutral marshalling paradigm between runtimes—but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocators. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature, it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and the tools because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant level. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s and thus theoretically Go’s performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement which Google, in its corporate and web sandboxes, is oblivious to. I'm not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement which Google in its corporate and web sandboxes is oblivious to, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here, instead they push Python and Java on their app cloud? I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn't get this; it's stupid to have to recompile an application's distribution. Software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation. Once again, Google is not out in the real world, they live in a box (their headquarters and their exposed web).

And compare the goals of C#:

  • C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go's web site (http://golang.org/), because the language does not look like much of a readability / maintainability improvement over what Java and C# offered up.

  • Go supposedly offers up memory management, but still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static!
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := operator for declaring and assigning a variable in one step (plain = still performs ordinary assignment). I suppose this would help eliminate the issue where people type = where they meant to type == and destroy their variables.
  • Go has a nice "defer" statement that is akin to C#'s using() {} and try...finally blocks (see the C# sketch after this list). It allows you to be lazy and disorganized, such that late-executed code that should be called after immediate code doesn't require putting it below the immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer's practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it.
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is
       
        var p *[]int = new([]int) // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

       
      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..
       
        int[] p = null; // declares an array variable; p is null; rarely useful
        var v = new int[100]; // the variable v now refers to a new array of 100 ints

       
      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, nevermind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbligook. C# offers richly defined System.Array and all this stuff is transparent to the coder who really doesn’t need to know that there are pointers, somewhere, associated with the reference to your array, isn’t that the way it all should be? Is it really necessary to have a completely different semantic (new() vs. make())? Ohh yeah. The frickin pointer vs. the reference.
  • I see Go has a fmt.Printf(), plus a fmt.Fprintf(), plus a fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is; http://golang.org/search?q=overloading
  • Go has “goroutines”. It’s basically, “go func() { /* do stuff */ }” and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code, so yes, it’s supported. F# (a young .NET sibling of C#) has this, too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s.
  • Go has channels for communication purposes. C# has WCF for this, which is frankly a mess. The closest you can get to Go on the CLR as far as channels go is Axum, which is a variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover.
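To ground the defer and goroutine comparisons above, here's a minimal C# sketch of my own (an illustration under my own assumptions, not anything from the Go documentation; it assumes a data.txt file sits next to the executable):

    using System;
    using System.IO;
    using System.Threading;

    class GoComparisons
    {
        static void Main()
        {
            // Go's defer vs. C#'s using / try...finally: cleanup is declared
            // at acquisition time and runs automatically when the scope exits.
            using (var reader = new StreamReader("data.txt"))
            {
                Console.WriteLine(reader.ReadLine());
            } // reader.Dispose() fires here, akin to a deferred Close()

            // A "goroutine"-style call: an anonymous delegate handed to a
            // worker thread pool thread in one line of code.
            ThreadPool.QueueUserWorkItem(delegate { Console.WriteLine("in parallel"); });

            Thread.Sleep(100); // crude pause so the pooled work gets a chance to run
        }
    }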

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (which, frankly, typically suck at a design level when they aren't tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, honestly I think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined everywhere on planet Earth through the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix and the applications and services that run on it have been the majority; if nothing else, Microsoft has most certainly still had the lead in software as a company, which is my point.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly the fact that Go presentations neglect C# severely causes question to Go’s trustworthiness.

In my opinion, a better approach to what Google was trying to do with Go would be to take a popular language, such as C#, F#, or Axum, and break it away from the language's implementation libraries, i.e. the .NET platform's BCL, replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be both a) a much better language and runtime than Go, because it would offer most of the benefits of Go in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR's and C#'s multitude of advantages over C/C++), and b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al, our needs are quite well met. So thanks anyway, Google, but, really, you should go now.


C# | F# | General Technology | Mono | Opinion | Software Development

Nine Reasons Why 8GB Is Only Just Enough (For A Professional Business Software Developer)

by Jon Davis 13. February 2009 21:29

Today I installed 8GB on my home workstation/playstation. I had 8GB lying around already from a voluntary purchase for a prior workplace (I took my RAM back and put the work-provided RAM back in before I left that job), but that brand of RAM didn't work correctly on my home PC's motherboard. It's all good now, though: with some high-quality performance RAM from OCZ, my Windows 7 system self-rating on RAM I/O jumped from 5.9 to 7.2.

At my new job I had to request a RAM upgrade from 2GB to 4GB. (Since it's 32-bit XP I couldn't go any higher.) When I asked when 64-bit Windows Vista or Windows 7 would be put on the table for consideration as an option for employees, I was told "there are no plans for 64-bit".

The same thing happened with my last short-term gig. Good God, corporate IT folks everywhere are stuck in the year 2002. I can barely function at 4GB, can’t function much at all at 2GB.

By quadrupling the performance of your employee's system, you’d effectively double the productivity of your employee; it’s like getting a new employee for free.

If you are a multi-role developer and aren't already saturating at least 4GB of RAM, you are throwing away your employer's money, and if you are IT and not providing at least 4GB RAM to developers and actively working on adding corporate support for 64-bit for employees' workstations, you are costing the company a ton of money in productivity loss!! I don't know how many times I've seen people restart their computers or sit and wait for 2 minutes for Visual Studio to come up because their machine is bogged down on a swap file. That was "typical" half a decade ago, but it's not acceptable anymore. The same is true of hard drive space. Fast 1 Terabyte hard drives are available for less than $100 these days; there is simply no excuse. For any employee who makes more than X (say, $25,000), for Pete's sake, throw in an extra $1000-$2000 or so and get the employee two large (24-inch) monitors, at least 1TB of hard drive space (ideally 4 drives in a RAID-0+1 array), 64-bit Windows Server 2008 / Windows Vista / Windows 7, a quad-core CPU, and 8 GB of high performance (800+ MHz) RAM. It's not $2,000 or so to lose; it's $2,000 that will save you many thousands more. By quadrupling the performance of your employee's system, you'd effectively double the productivity of your employee; it's like getting a new employee for free. And if you are the employee, making double of X (say, more than $50,000), and your employer can somehow allow it (and they should; shame on them if they don't, since they won't do it themselves), you should go out and get your own hardware upgrades. Make yourself twice as productive, and earn your pay with pride.

In a business environment, whether one is paid by the hour or salaried (already expected to work X hours a week, which is effectively loosely translated to hourly anyway), time = money. Period. This is not about developers enjoying a luxury, it’s about them saving time and employers saving money.

Note to the morons who argue “this is why developers are writing big, bloated software that suck up resources” .. Dear moron, this post is from the perspective of an actual developer’s workstation, not a mere bit-twiddling programmer—a developer, that is, who wears many hats and must not just write code but manage database details, work with project plans, document technical details, electronically collaborate with teammates, test and debug, etc., all in one sitting. Nothing in here actually recommends or even contributes to writing big, bloated software for an end user. The objective is productivity, your skills as a programmer are a separate concern. If you are producing bad, bloated code, the quality of the machine on which you wrote the code has little to nothing to contribute to that—on the contrary, a poor developer system can lead to extremely shoddy code because the time and patience required just to manage to refactor and re-test become such a huge burden. If you really want to test your code on a limited machine, you can rig VMWare / VirtualPC / VirtualBox to temporarily run with lesser RAM, etc. You shouldn’t have to punish yourself with poor productivity while you are creating the output. Such punishment is far more monetarily expensive than the cost of RAM.

I can think of a lot of reasons for 8+ GB RAM, but I’ll name a handful that matter most to me.

  1. Windows XP / Server 2003 alone takes up half a gigabyte of RAM (Vista / Server 2008 takes up double that). Scheduled tasks and other processes cause the OS to peak out at some 50+% more. Cost: 512-850MB. Subtotal @nominal: ~512MB; @peak: 850MB
  2. IIS isn’t a huge hog but it’s a big system service with a lot of responsibility. Cost: 50-150. Subtotal @nominal: ~550MB; @peak 1GB.
  3. Microsoft Office and other productivity applications often need to be used more than one at a time, as needed. For more than two decades, modern computers have supported a marvelous feature called multi-tasking. This means that if you have Outlook open, and you double-click a Microsoft Word attachment, and upon reading it you realize that you need to update your Excel spreadsheet, which in your train of thought leads you to update an Access database, and then you realize that these updates result in a change of product features so you need to reflect these details in your PowerPoint presentation, you should have been able to open each of these applications without missing a beat, and by the time you're done you should be able to close all these apps in no more than one passing second per click of the [X] close button of each app. Each of these apps takes up as much as 100MB of RAM, Outlook typically even more, and Outlook is typically always open. Cost: 150MB-1GB. Subtotal @nominal: 700MB; @peak 2GB.
  4. Every business software developer should have his own copy of SQL Server Developer Edition. Every instance of SQL Server Developer Edition takes up a good 25MB to 150MB of RAM just for the core services, multiplied by each of the support services. Meanwhile, Visual Studio 2008 Pro and Team Edition come with SQL Server 2005 Express Edition, not 2008, so for some of us that means two installations of SQL Server Express. Both SQL Server Developer Edition and SQL Server Express Edition are ideal to have on the same machine since Express doesn’t have all the features of Developer and Developer doesn’t have the flat-file support that is available in Express. SQL Server sitting idly costs a LOT of CPU, so quad core is quite ideal. Cost: @nominal: 150MB, @peak 512MB. Subtotal @nominal: 850MB; @peak: 2.5GB. We haven’t even hit Visual Studio yet.
  5. Except in actual Database projects (not to be confused with code projects that happen to have database support), any serious developer would use SQL Server Management Studio, not Visual Studio, to access database data and to work with T-SQL tasks. This would be run alongside Visual Studio, but nonetheless as a separate application. Cost: 250MB. Subtotal @nominal: 1.1GB; @peak: 2.75GB.
  6. Visual Studio itself takes the cake. With ReSharper and other popular add-ins like PowerCommands installed, Visual Studio just started up takes up half a gig of RAM per instance. Add another 250MB for a typical medium-size solution. And if you, like me lately, work in multiple branches and find yourself having to edit several branches for different reasons, one shouldn’t have to close out of Visual Studio to open the next branch. That’s productivity thrown away. This week I was working with three branches; that’s 3 instances. Sample scenario: I’m coding away on my sandbox branch, then a bug ticket comes in and I have to edit the QA/production branch in an isolated instance of Visual Studio for a quick fix, then I get an IM from someone requesting an immediate resolution to something in the developer branch. Lucky I didn’t open a fourth instance. Eventually I can close the latter two instances down and continue with my sandbox environment. Case in point: Visual Studio costs a LOT of RAM. Cost @nominal 512MB, @peak 2.25GB. Subtotal @nominal: 1.6GB; @peak: 5GB.
  7. Your app being developed takes up RAM. This could be any amount, but don’t forget that Visual Studio instantiates independent web servers and loads up bloated binaries for debugging. If there are lots of services and support apps involved, they all stack up fast. Cost @nominal: 50MB, @peak 750MB. Subtotal @nominal: 1.65GB; @peak: 5.75GB.
  8. Internet Explorer and/or your other web browsers take up plenty of RAM. Typically 75MB for IE to be loaded, plus 10-15MB per page/tab. And if you’re anything like me, you’ll have lots and lots and LOTS of pages/tabs by the end of the day; by noon I typically end up with about four or five separate IE windows/processes, each with 5-15 tabs. (Mind you, all or at least most of them are work-related windows, such as looking up internal/corporate documents on the intranet or tracking down developer documentation such as API specs, blogs, and forum posts.) Cost @nominal: 100MB; @peak: 512MB. Subtotal @nominal: 1.75GB; @peak: 6.5GB.
  9. No software solution should go untested on as many platforms as is going to be used in production. If it’s a web site, it should be tested on IE 6, IE 7, and IE 8, as well as current version of Opera, Safari 3+, Firefox 1.5, Firefox 2, and Firefox 3+. If it’s a desktop app, it should be tested on every compatible version of the OS. If it’s a cross-platform compiled app, it should be tested on Windows, Mac, and Linux. You could have an isolated set of computers and/or QA staff to look into all these scenarios, but when it comes to company time and productivity, the developer should test first, and he should test right on his own computer. He should not have to shutdown to dual-boot. He should be using VMWare (or Virtual PC, or VirtualBox, etc). Each VMWare instance takes up the RAM and CPU of a normal system installation; I can’t comprehend why it is that some people think that a VMWare image should only take up a few GB of hard drive space and half a gig of RAM; it just doesn’t work that way. Also, in a distributed software solution with multiple servers involved, firing up multiple instances of VMWare for testing and debugging should be mandatory. Cost @nominal: 512MB; @peak: 4GB. Subtotal @nominal: 2.25GB; @peak: 10.5GB.

Total peak memory (with 64-bit Vista SP1, which was not accounted for in #1): 11+GB!!!
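If you want to sanity-check that total, here's the same tally as a trivial program of my own (the per-item numbers are my nominal/peak estimates from the list above; my running subtotals in the list rounded as they went):

    using System;

    class RamBudget
    {
        static void Main()
        {
            // Per-item (nominal, peak) cost estimates in MB from items 1-9 above.
            int[] nominalMB = { 512,  50,  150, 150, 250,  512,  50, 100,  512 };
            int[] peakMB    = { 850, 150, 1024, 512, 250, 2250, 750, 512, 4096 };

            int nominalTotal = 0, peakTotal = 0;
            for (int i = 0; i < nominalMB.Length; i++)
            {
                nominalTotal += nominalMB[i];
                peakTotal += peakMB[i];
                Console.WriteLine("After item {0}: nominal {1:F2} GB, peak {2:F2} GB",
                    i + 1, nominalTotal / 1024.0, peakTotal / 1024.0);
            }
            // Peak lands around 10.2 GB; swap in the heavier 64-bit Vista
            // footprint for item 1 and you cross 11 GB.
        }
    }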

Now, you could argue all day long that you can “save money” by shutting down all those “peak” processes to use less RAM rather than using so much. I’d argue all day long that you are freaking insane. The 8GB I bought for my PC cost me $130 from Dell. Buy, insert, test, save money. Don’t be stupid and wasteful. Make yourself productive.

Why Windows and IIS Developers Need To Try PHP, Ruby, Python, Pure Javascript, Or Something Else

by Jon Davis 5. January 2009 05:04

I've argued frequently here before that rather than being a master of one thing, it's better to be knowledgeable of many things and expert in a few. This is not the same as being a jack of all trades, who would suck at all of those things; if you're good at several things and expert in a few, you might actually be better than the master of the one thing, because your experience is diversified.

Perfect analogy...

http://www.candychang.com/design/pages/milk_and_muffin.htm

"[Trying other things makes your specialty] taste better because now you know what makes it special ... and your appeciation of [your specialty] gives you a better understanding of all the other [alternative technologies]."


Opinion | Software Development | Web Development

Keys To Web 3.0 Design and Development When Using ASP.NET

by Jon Davis 9. October 2008 05:45

You can skip the following boring story as it's only a prelude to the meat of this post.

As I've been sitting at my job lately trying to pull off my web development ninja skillz, I feel like my hands are tied behind my back because I'm there temporarily as a consultant to add features, not to refactor. The current task at hand involves adding a couple additional properties to a key user component in a rich web application. This requires a couple extra database columns and a bit of HTML interaction to collect the new settings. All in all, about 15 minutes, right? Slap the columns into the database, update the SQL SELECT query, throw on a couple ASP.NET controls, add some data binding, and you're done, right? Surely not more than an hour, right?

Try three hours, just to add the columns to the database! The HTML is driven by a data "business object" that isn't a business object at all, just a data layer that has method stubs for invoking stored procedures and returns only DataTables. There are four types of "objects" based on the table being modified, and each type has its own stored procedure that ultimately proxies out to the base type's stored procedure, so that means at least five stored procedures for each CRUD operation affected by the addition. Overall, about 10 database objects were touched and as many C# data layer objects as well. Add to that a proprietary XML file that is used to map these data objects' DataTable columns, both in (parameters) and out (fields).

That's just the data. Then on the ASP.NET side, to manage event properties there's a control that inherits another control that is contained by another control that is contained by two other controls before it finally shows up on the page. Changes to the properties are a mix of hard-wired bindings to the lowest base control (properties) for some of the user's settings, while for most of the rest of the user's settings on the same page, CLR events (event args) are raised by the controls and captured by the page that contains it all. There are at least five different events, one for each "section" of properties. To top it off, in my shame, I added both another "SaveXXX" event and another way of passing the data--I use a series of FindControl(..) invocation chains to get to the buried control and fetch the setting I wanted to add to the database and/or translate back out to the view. (I would have done better than to add more kludge, but I couldn't without being enticed to refactor, which I couldn't do; it's a temporary contract and the boss insisted that I not.)
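For flavor, the retrieval I added looked something like the following code-behind sketch (the control names here are hypothetical stand-ins; the real ones are mercifully forgotten):

    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class EventPage : Page
    {
        // Digging one setting out of a control buried several composites deep.
        // Any rename or re-nesting of these controls breaks this at runtime,
        // not compile time, which is exactly why it's a kludge.
        private string GetMaxAttendees()
        {
            Control section = FindControl("EventEditorControl")
                .FindControl("PropertiesContainer")
                .FindControl("ScheduleSection");
            var maxAttendeesBox = (TextBox)section.FindControl("MaxAttendeesTextBox");
            return maxAttendeesBox.Text;
        }
    }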

To top it all off, just the simple CRUD stored procedures alone are slower than an eye blink, and seemingly showstopping in code. It takes about five seconds to handle each postback on this page, and I'm running locally (with a networked SQL Server instance).

The guys who architected all this are long gone. This wasn't the first time I've been baffled by the output of an architect who tries too hard to do the architectural deed while forgetting that his job is not only to be declarative on all layers but also to balance it with performance and making the developers' lives less complicated. In order for the team to be agile, the code must be easily adaptable.

Plus the machine I was given is, just like everyone else's, a cheap Dell with 2GB RAM and a 17" LCD monitor. (At my last job, which I quit, I had a 30-inch monitor and 4GB RAM which I replaced without permission and on my own whim with 8GB.) I frequently get OutOfMemoryExceptions from Visual Studio when trying to simply compile the code.

There are a number of reasons I can pinpoint to describe exactly why this web application has been so horrible to work with. Among them,

  • The architecture violates the KISS principle. The extremes of the data layer prove to be confounding, and burying controls inside controls (compositing) and then forking instances of them is a severe abuse of ASP.NET "flexibility".
  • OOP principles were completely ignored. Not a single data layer inherits from another. There is no business object among the "Business" objects' namespace, only data invocation stubs that wrap stored procedure execution with a transactional context, and DataTables for output. No POCO objects to represent any of the data or to reuse inherited code.
  • Tables, not stored procedures, should be used in basic CRUD operations. One should use stored procedures only in complex operations where multiple two-way queries must be accomplished to get a job done. Stored procedures are good for complex operations, bad for basic data I/O and model management.
  • Way too much emphasis on relying on the Web Forms "featureset" and lifecycle (event raising, viewstate hacking, control compositing, etc.) to accomplish functionality, and way too little understanding and utilization of the basic birds and butterflies (HTML and script).
  • Way too little attention to developer productivity by failure to move the development database to the local switch, have adequate RAM, and provide adequate screen real estate to manage hundreds of database objects and hundreds of thousands of lines of code.
  • The development manager's admission of the sadly ignorant and costly attitude that "managers don't care about cleaning things up and refactoring, they just want to get things done and be done with it"--I say "ignorant and costly" because my billable hours were more than quadrupled versus what they would have been with clean, editable code to begin with.
  • New features are not testable in isolation -- in fact, they aren't even compilable in isolation. I can compile and do lightweight testing of the data layer without more than a few heartbeats, but it takes two minutes to compile the web site just to see where my syntax or other compiler-detected errors are in my code additions (and I haven't been sleeping well lately so I'm hitting the Rebuild button and monitoring the Errors window an awful lot). 

Even as I study (ever so slowly) for MCPD certification for my own reasons while I'm at home (spare me the biased anti-Microsoft flames on that, I don't care) I'm finding that Microsoft end developers (Morts) and Microsofties (Redmondites) alike are struggling with the bulk of their own technology and are heaping up upon themselves the knowledge of their own infrastructure before fully appreciating the beauty and the simplicity of the pure basics. Fortunately, Microsoft has had enough, and they've been long and hard at the drawing board to reinvent ASP.NET with ASP.NET MVC. But my interests are not entirely, or not necessarily, MVC-related.

All I really want is for this big fat pillow to be taken off of my face, and all these multiple layers of coats and sweatshirts and mittens and ski pants and snow boots to be taken off me, so I can stomp around wearing just enough of what I need to be decent. I need to breathe, I need to move around, and I need to be able to do some ninja kung fu.

These experiences I've had with ASP.NET solutions often make me sit around brainstorming how I'd build the same solutions differently. It's always easy to be everyone's skeptic, and it requires humility to acknowledge that just because you didn't write something or it isn't in your style or flavor doesn't mean it's bad or poorly produced. Sometimes, however, it is. And most solutions built with Web Forms, actually, are.

My frustration isn't just with Web Forms. It's with corporations that build upon Internet Explorer rather than HTML+Javascript. It's with most ASP.NET web applications adopting a look-and-feel that seems to grow in a box controlled by Redmondites, with few artistic deviators rocking the boat. It's with the server-driven view management rather than smart clients in script and markup. It's with nearly all development frameworks that cater toward the ASP.NET crowd being built for IIS (the server) and not for the browser (the client).

I intend to do my part, although intentions are easy and actions can be hard. I've helped design an elaborate client-side MVC framework before, with great pride, and I'm thinking about doing it again, implementing it myself this time (I didn't have the luxury of a real-world implementation [i.e. a site] last time; I only helped design it and wrote some of the core code), and open sourcing it for the ASP.NET crowd. I'm also thinking about building a certain kind of ASP.NET solution I've frequently needed to work with (CRM? CMS? Social? something else? *grin* I won't say just yet) that takes advantage of certain principles.

What principles? I need to establish these before I even begin. These have already worked their way into my head and my attitude and are already an influence in every choice I make in web architecture, and I think they're worth sharing.

1. Think dynamic HTML, not dynamically generated HTML. Think of HTML like food; do you want your fajitas sizzling when they arrive, so you can use a fork and knife and enjoy them fresh on your plate, or do you prefer your food preprocessed and shoved into your mouth like a dripping wet ball of finger-food sludge? As much as I love C#, and acknowledge the values of Java, PHP, Ruby on Rails, et al, the proven king and queen of the web right now, for most of the web's past, and for the indefinite future are the HTML DOM and Javascript. This has never been truer than now with jQuery, MooTools, and other (I'd rather not list them all) significant scripting libraries that have flooded the web development industry with client-side empowerment. Now with Microsoft adopting jQuery as a core asset for ASP.NET's future, there's no longer any excuse. Learn to develop the view for the client, not for the server.

Why? Because despite the fact that client-side debugging tools are less evolved than on the server (no edit-and-continue in VS, for example, and FireBug is itself buggy), the overhead of managing presentation logic in a (server) context that doesn't relate to the user's runtime is just too much to deal with sometimes. Server code often takes time to recompile, whereas scripts don't typically require compilation at all. While in theory there is plenty of control on the server to debug what's needed while you have control of it in your own predictable environment, in practice there are just too many stop-edit-retry cycles going on in server-oriented view management.

And here's why that is. The big reason to move the view to the client is that developers are writing WAY too much view, business, and data mangling logic in the same scope and context. Client-driven view management nearly forces the developer to isolate view logic from data. In ASP.NET Web Forms, your 3 tiers are the database, data+view mangling on the server, and finally whatever poor and unlucky little animal (browser) has to suffer with the resulting HTML. ASP.NET MVC changes that to essentially five tiers: the database, the models, the controller, the server-side view template, and finally whatever poor and unlucky little animal has to suffer with the resulting HTML. (Okay, Microsoft might be changing that by adopting jQuery and promising a client solution; we'll see.)

Most importantly, client-driven views make for a much richer, more interactive UIX (User Interface/eXperience); you can, for example reveal/hide or enable/disable a set of sub-questions depending on if the user checks a checkbox, with instant gratification. The ASP.NET Web Forms model would have it automatically perform a form post to refresh the page with the area enabled/disabled/revealed/hidden depending on the checked state. The difference is profound--a millisecond or two versus an entire second or two.

2. Abandon ASP.NET Web Forms. RoR implements a good model; try gleaning from that. ASP.NET MVC might be the way of the future. But frankly, most of the insanely popular web solutions on the Internet are PHP-driven these days, and I'm betting that's because PHP follows a coding model similar to classic ASP. No MVC stubs. No code-behinds. All that stuff can be tailored into a site as a matter of discipline (one of the reasons why PHP added OOP), but you're not forced into a one-size-fits-all paradigm; you just write your HTML templates and go.

Why? Web Forms is a bear. Its only two advantages are the ability to drag-and-drop functionality onto a page and watch it go, and premier vendor (Microsoft / Visual Studio / MSDN) support. But it's difficult to optimize, difficult to templatize, difficult to abstract away from business logic layers (if only in that doing so requires intentional discipline), and it puts way too much emphasis on the lifecycle of the page hit and postback. Look around at the ASP.NET web forms solutions out there. Web Forms is crusty like Visual Basic is crusty. It was created for, and is mostly used by, corporate grunts who build B2B (business-to-business) or internal apps. The rest of the web sites that use ASP.NET Web Forms suffer greatly from the painful code bloat of the ASP.NET Web Forms coding model and the horrible end-user costs of page bloat and round-trip navigation.

Kudos to Guthrie, et al, who developed Web Forms, it is a neat technology, but it is absolutely NOT a one-size-fits-all platform any more than my winter coat from Minnesota is. So congratulations to Microsoft for picking up the ball and working on ASP.NET MVC.

3. Use callbacks, not postbacks. Sometimes a single little control, like a textbox that behaves like an auto-suggest combobox, just needs a dedicated URL to perform an AJAX query against. But also, in ASP.NET space, I envision the return of multiple <form>'s, with DHTML-based page MVC controllers powering them all, driving them through AJAX/XmlHttpRequest.

Why? Clients can be smart now. They should do the view processing, not the server. The browser standard has finally arrived to such a place that most people have browsers capable of true DOM/DHTML and Javascript with JSON and XmlHttpRequest support.

Clearing and redrawing the screen is as bad as 1980s BBS ANSI screen redraws. It's obsolete. We don't need to write apps that way. Postbacks are cheap; don't be cheap. Be agile; use patterns, practices, and techniques that save development time and energy while avoiding the loss of a fluid user experience. <form action="someplace" /> should *always* have an onsubmit handler that returns false but runs an AJAX-driven post. The page should *optionally* redirect, but more likely only the area of the form or a region of the page (a containing DIV perhaps) should be replaced with the results of the post. Retain your header and sidebar in the user experience, and don't even let the content area go white for a split second. Buffer the HTML and display it when ready.

ASP.NET AJAX has region refreshes already, but still supports only <form runat="server" /> (limit 1), and the code-behind model of ASP.NET AJAX remains the same. Without discipline of changing from postback to callback behavior, it is difficult to isolate page posts from componentized view behavior. Further, <form runat="server" /> should be considered deprecated and obsolete. Theoretically, if you *must* have ViewState information you can drive it all with Javascript and client-side controllers assigned to each form.

ASP.NET MVC can manage callbacks uniformly by defining a REST URL suffix, prefix, or querystring, and then assigning a JSON handler view to that URL, for example ~/employee/profile/jsmith?view=json might return the Javascript object that represents employee Joe Smith's profile. You can then use Javascript to pump HTML generated at the client into view based on the results of the AJAX request.
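A minimal sketch of what I mean (my own illustration under my own assumptions; the controller, route, and the JsonRequestBehavior detail are hypothetical, not an excerpt from any shipping code):

    using System.Web.Mvc;

    public class EmployeeController : Controller
    {
        // A URL like ~/employee/profile/jsmith?view=json lands here.
        public ActionResult Profile(string id, string view)
        {
            // Stand-in for a real model lookup by username.
            var profile = new { UserName = id, Department = "Engineering" };

            if (view == "json")
            {
                // Hand the raw object to the client-side view to render.
                // (Later MVC releases require the second argument for GETs.)
                return Json(profile, JsonRequestBehavior.AllowGet);
            }

            return View(profile); // otherwise the server-rendered template
        }
    }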

4. By default, allow users to log in without accessing a login page. A slight tangent (or so it would seem), this is a UI design constraint, something that has been a pet peeve of mine ever since I realized that it's totally unnecessary to have a login page. If you don't want to put ugly Username/Password fields on the header or sidebar, use AJAX.

Why? Because if a user visits your site and sees something interesting and clicks on a link, but membership is required, the entire user experience is interrupted by the disruption of a login screen. Instead, fade out to 60%, show a DHTML pop-up login, then fade in and continue forward. The user never leaves the page before seeing the link or functionality being accessed.

Imagine if Microsoft Windows' UAC, OS X's keyring, or GNOME's sudo auth did a total clear-screen and ignored your action whenever it needed an Administrator password. Thankfully it doesn't work that way; the flow is paused with a small dialogue box, not flat-out interrupted.

5. Abandon the Internet Explorer "standard". This goes out to corporate folks who target IE. I am not saying this as an anti-IE bigot. In fact, I'm saying this in Internet Explorer's favor. Internet Explorer 8 (currently not yet released, still in beta) introduces better web standards support than previous versions of Internet Explorer, and it's not nearly as far behind the trail of Firefox and WebKit (Safari, Chrome) as Internet Explorer 7 is. With this reality, web developers can finally and safely build W3C-compliant web applications without worrying too much about which browser vendor the user is using, and instead ask the user to get the latest version.

Why? Supporting multiple different browsers typically means writing more than one version of a view. This means developer productivity is lost. That means that features get stripped out due to time constraints. That means that your web site is crappier. That means users will be upset because they're not getting as much of what they want. That means fewer users will come. And that means less money. So take on the "Write once, run anywhere" mantra (which was Java's slogan back in the mid-90s) by writing W3C-compliant code, and leave behind only those users who refuse to update their favorite browsers, and you'll get a lot more done while reaching a broader market, if not now then very soon, perhaps half a year after IE 8 is released. Use Javascript libraries like jQuery to handle most of the browser differences that are left over, while at the same time being empowered to add a lot of UI functionality without postbacks. (Did I mention that postbacks are evil?)

6. When hiring, favor HTML+CSS+Javascript gurus who have talent and an eye for good UIX (User Interface/eXperience) over ASP.NET+database gurus. Yeah! I just said that!

Why? Because the web runs on the web! Surprisingly, most employers don't have any idea and have this all upside down. They favor database gurus as gods and look down upon UIX developers as children. But the fact is, I've seen more ASP.NET+SQL guys who halfway know that stuff and know little of HTML+Javascript than I have seen AJAX pros, and honestly pretty much every AJAX pro is bright enough to get down and dirty with BLL and SQL when the time comes. Personally, I can see why HTML+CSS+Javascript roles are paid less (sometimes a lot less) than the server-oriented developers--any script kiddie can learn HTML!--but when it comes to professional web development they are ignored WAY too much because of only that. The web's top sites require extremely brilliant front-end expertise, including Facebook, Hotmail, Gmail, Flickr, YouTube, MSNBC--even Amazon.com, which most prominently features server-generated content yet also reveals a significant amount of client-side expertise.

I've blogged it before and I'll mention it again: the one, first, and most recent time I ever had to personally fire a co-worker (due to my boss being out of town, my having authority, and my boss requesting it of me over the phone) was when I was working with an "imported" contractor who had a master's degree and full Microsoft certification, but could not copy two simple hyperlinks with revised URLs in less than 5-10 minutes while I watched. The whole office was in a gossiping frenzy--"What? Couldn't create a hyperlink? Who doesn't know HTML?! How could anyone not know HTML?!"--but I realized that the core fundamentals have been taken for granted by us as technologists to such an extent that we've forgotten how important it is to value them in our hiring processes.

7. ADO.NET direct SQL code or ORM. Pick one. Don't just use data layers. Learn OOP fundamentals. The ActiveRecord pattern is nice. Alternatively, if it's a really lightweight web solution, just go back to writing plain-Jane SQL with ADO.NET. If you're using C# 3.0, which of course you are in the context of this blog entry, then use LINQ-to-SQL or LINQ-to-Entities. On the ORM side, however, I'm losing favor with some of them because they often cater to a particular crowd. I'm slow to say "enterprise" because, frankly, too many people assume the word "enterprise" for their solutions when they are anything but. Even web sites running at tens of thousands of hits a day and generating hundreds of thousands of dollars of revenue every month don't necessarily mean "enterprise". The term "enterprise" is more of a people management inference than a stability or quality effort. It's about getting many people on your team using the same patterns and not having loose and abrupt access to thrash the database. For that matter, the corporate slacks-and-tie crowd of ASP.NET "Morts" can often relate to "enterprise" without even realizing it. But for a very small team (10 or less), and especially for a micro ISV (5 developers or less) with a casual and agile attitude, take the word "enterprise" with a grain of salt. You don't need a gajillion layers of red tape. For that matter, though, smaller teams are usually small because of tighter budgets, and that usually means tighter deadlines, and that means developer productivity must reign right there alongside stability and performance. So find an ORM solution that emphasizes productivity (minimal maintenance and easy adaptability), and don't you dare trade routine refactoring for task-oriented focus, as you'll end up just wasting everyone's time in the long run. Always include refactoring to simplicity in your maintenance schedule.

Why? Why go raw with ADO.NET direct SQL or choose an ORM? Because some people take the data layer WAY too far. Focus on what matters; invest a little effort up front so you can avoid the endless effort of fussing with the data tier. Data management is less important than most teams seem to think. The developer's focus should be on the UIX (User Interface/eXperience) and the application functionality, not on how to store the data. There are three areas where the typical emphasis on data management is legitimately important: stability, performance (both of which are why we choose SQL Server over, oh, I dunno, XML files?), and queryability. The last is important both for the application and for decision makers. But a fourth requirement is routinely overlooked, and that is the emphasis on establishing a lightweight developer workflow for working with data, so that you can create features quickly and adapt existing code easily. Again, this is why a proper understanding of OOP--how to apply it, when to use it, etc.--is emphasized all the time by yours truly. Learn the value of abstraction and inheritance and of encapsulating interfaces (resulting in polymorphism). Your business objects should not be much more than POCO objects with application-realized properties. Adding a new simple data-persisted object, or modifying an existing one with, say, a new column, should not take more than a minute of one's time. Spend the rest of that time instead on how best to impress the user with a snappy, responsive user interface.
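
For illustration, a business object in this style is little more than the following (the Customer class and its properties are hypothetical):

    // A hypothetical POCO business object. Persistence mapping lives
    // elsewhere (ORM attributes or an external mapping), so the class
    // stays trivial to extend.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }

        // An "application-realized" property: computed, never persisted.
        public string DisplayName
        {
            get { return string.IsNullOrEmpty(Name) ? Email : Name; }
        }
    }

Adding that new column should amount to one auto-property here plus one matching column in the table--a one-minute change.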

8. Callback-driven content should derive equally easily from your server, your partner's site, or some strange web service all the way in la-la land. We're aspiring for Web 3.0 now, but what happened to Web 2.0? We're building on top of it! Web 2.0 brought us mashups, single sign-ons, and cross-site social networking. Facebook Applications are a classic demonstration of an excelling student of Web 2.0 now graduating and turning into a Web 3.0 student. Problem is, to keep the momentum going, who's driving this rig? If it's not you, you're missing out on the 3.0 vision.

Why? Because now you can. Hopefully by now you've already shifted the bulk of the view logic over to the client. And you've empowered your developers to focus on the front-end UIX. Now, though, the client view is empowered to do more. It still has to derive content from you, but in a callback-driven architecture, the content is URL-defined. As long as security implications are resolved, you now have the entire web at your [visitors'] disposal! Now turn it around to yourself and make your site benefit from it!

If you're already invoking web services, get that stuff off your servers! Web services queried from the server cost bandwidth and add significant time overhead before the page is released from the buffer to the client. The whole time you're fetching the results of a web service, the client is sitting there looking at a busy animation or a blank screen. Don't let that happen! Throw the client a bone and let it fetch the external resources on its own.
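
To sketch the anti-pattern in C# (the partner URL and the PartnerContentLiteral control are hypothetical):

    using System;
    using System.IO;
    using System.Net;

    // Anti-pattern: blocking the page on a remote web service call.
    // The visitor stares at a blank screen for the full duration of
    // the remote fetch before the first byte reaches the browser.
    protected void Page_Load(object sender, EventArgs e)
    {
        var request = (HttpWebRequest)WebRequest.Create(
            "http://partner.example.com/service/latest");
        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            PartnerContentLiteral.Text = reader.ReadToEnd();
        }
        // Better: render a placeholder <div> immediately and let the
        // browser fetch the partner content itself after the page loads.
    }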

9. Pay attention to the UIX design styles of the non-ASP.NET Web 2.0/3.0 communities. There is such a thing as a "Web 2.0 look", whether we like to admit it or not; we web developers evolved and came up with innovations worth standardizing on, so why can't designers evolve and come up with visual innovations worth standardizing on? If the end user's happiness is our goal, how are features and stable, performant code more important than aesthetics and ease of use? The problem is, one perspective of what "the Web 2.0 look" actually looks like is likely very different from another's or my own. I'm not speaking of heavy gloss or diagonal lines. I most certainly am not talking about the "bubble gum" look. (I jokingly mutter "Let's redesign that with diagonal lines and beveled corners!" now and then, but when I said that to my previous boss and co-worker, both of whom already looked down on me far more than I deserved, neither of them understood that I was joking. Or, at least, they didn't laugh or even smile.) No, I am talking about the use of artistic elements, font choices and font styles, and layout characteristics that make a web site stand out from the crowd as being highly usable and engaging.

Let's demonstrate, shall we? Here are some sites and solutions that deserve some praise. None of them are ASP.NET-oriented.

  • http://www.javascriptmvc.com/ (ugly colors but otherwise nice layout and "flow"; all functionality driven by Javascript; be sure to click on the "tabs")
  • http://www.deskaway.com/ (ignore the ugly logo but otherwise take in the beauty of the design and workflow; elegant font choice)
  • http://www.mosso.com/ (I really admire the visual layout of this JavaServer Pages-driven site, and fortunately they support ASP.NET on their product)
  • http://www.feedburner.com/ (these guys did a redesign not too terribly long ago; I really admire their selective use of background patterns, large-font textboxes, hover effects, and overall aesthetic flow)
  • http://www.phpbb.com/ (stunning layout, rock solid functionality, universal acceptance)
  • http://www.joomla.org/ (a beautiful and powerful open source CMS)
  • http://goplan.org/ (I don't like the color scheme but I do like the sheer simplicity)
  • .. for that matter, I also love the design and simplicity of http://www.curdbee.com/

Now here are some ASP.NET-oriented sites. They are some of the most popular ASP.NET-driven sites and solutions, but their design characteristics, frankly, feel like the late 90s.

  • http://www.dotnetnuke.com/ (one of the most popular CMS/portal options in the open source ASP.NET community .. and, frankly, I hate it)
  • http://www.officelive.com/ (sign in and discover a lot of features with a "smart client" feel, but somehow it looks and feels slow, kludgy, and unrefined; I think it's because Microsoft doesn't get out much)
  • http://communityserver.com/ (it looks like a step in the right direction, but there's an awful lot of smoke and mirrors; follow the Community link and you'll see the best of what the ASP.NET community has to offer in the way of forums .. which frankly doesn't impress me as much as phpBB)
  • http://www.dotnetblogengine.net/ (my blog uses this; I like it well enough, but it's just one niche, and that's straight-and-simple blogs)
  • http://subsonicproject.com/ (the ORM technology is very nice, but the site design is only "not bad", and the web site starter kit leaves me shrugging with a shiver)

Let's face it, the ASP.NET community is not driven by designers.

Why? Why do I ramble on about such fluffy things? Because at my current job (see the intro text) the site design is a dump of one feature hastily slapped on after another, and although the web app has a lot of features and plenty of AJAX to empower it here and there, it is, for the most part, an ugly and disgusting piece of cow dung in the area of UIX (User Interface/eXperience). AJAX functionality is based on third-party components that "magically just work" while gobs and gobs of gobbledygook code on the back end attempts to wire everything together, and what AJAX is there is both rare and slow, encumbered by page bloat and server bloat. The front-end appearance is amateurish, and I'm disheartened as a web developer to work with it.

Such seems to be the makeup of way too many ASP.NET solutions that I've seen.

10. Componentize the client. Use "controls" on the client in the same way you might use .ASCX controls on the server, and in the process of doing this, implement a lifecycle and communications subsystem on the client. This is what I want to do, and again I'm thinking of coming up with a framework to pursue it to complement Microsoft's and others' efforts. If someone else (i.e. Microsoft) beats me to it, fine. I just hope theirs is better than mine.

Why? Well, if you're going to emphasize the client, you need a manageable development workflow.

ASP.NET thrives on the workflows of quick-tagging (<asp:XxxXxx runat="server" />) and drag-and-drop, and that's all part of the equation of what makes it so popular. But that's not all ASP.NET is good for. ASP.NET has two greatest strengths: IIS and the CLR (namely the C# language). The quality of integration of C# with IIS is incredible. You get near-native-compiled-quality code with the ease of deployment of a scripted text file, and the deployment is native to the server (no proxying, a la Apache->Tomcat->Java, or even FastCGI->PHP). So why not utilize these strengths to produce a Javascript-based view seed rather than to generate the entirety of the view?
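
Here's one rough sketch of what I mean by a view seed--the server renders a skeleton page plus the initial data as JSON, and client script builds the view from there. ProductRepository and pageSeed are hypothetical names, but JavaScriptSerializer ships with .NET 3.5:

    using System;
    using System.Web.Script.Serialization;

    // The code-behind seeds the client-side view with JSON instead of
    // rendering the entire view on the server.
    protected void Page_Load(object sender, EventArgs e)
    {
        var seed = new
        {
            user = Page.User.Identity.Name,
            products = ProductRepository.GetTopSellers(10)
        };
        string json = new JavaScriptSerializer().Serialize(seed);
        ClientScript.RegisterStartupScript(GetType(), "pageSeed",
            "var pageSeed = " + json + ";", true);
        // Client-side script reads window.pageSeed and renders the view.
    }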

On the competitive front, take a look at http://www.wavemaker.com/. Talk about drag-and-drop coding for smart client-side applications, driven by a rich server back-end (Java). This is some serious competition indeed.

11. RESTful URIs, not postback or Javascript inline resets of entire pages. Too many developers of AJAX-driven smart client web apps are bragging about how the user never leaves the page. This is actually not ideal.

Why? Every time the primary section of content changes, in my opinion, it should have a URI, and that should be reflected (somehow) in the browser's Address field. Even if it's going to be impossible to make the URL SEO-friendly (because there are no predictable hyperlinks that are spiderable), the user should be able to return to the same view later without retracing a series of logins and clicks. This is partly the very definition of the World Wide Web: all around the world, content is reflected with a URL.

12. Glean from the others. Learn CakePHP. Build a simple symfony or CodeIgniter site. Watch the Ruby on Rails screencasts and consider diving in. And have you seen Jaxer lately?!

And absolutely, without hesitation, learn jQuery, which Microsoft will be supporting from here on out in Visual Studio and ASP.NET. Discover the plug-ins and try to figure out how you can leverage them in an ASP.NET environment.

Why? Because you've lived in a box for too long. You need to get out and smell the fresh air. Look at the people as they pass you by. You are a free human being. Dare yourself to think outside the box. Innovate. Did you know that most innovations come from gleaning other people's imaginative ideas and implementations and reapplying them in your own world, using your own tools? Why should Ruby on Rails have a coding workflow that's better than ASP.NET's? Why should PHP be a significantly more popular platform on the public web than ASP.NET--what makes it so special besides being completely free of Redmondite ties? Can you interoperate with it? Have you tried? How can the innovations of Jaxer be applied to the IIS 7 and ASP.NET scenario, and what can you do to see something as earth-shattering inside this Mortian realm? How can you leverage jQuery to make your web site do things you wouldn't have dreamed of trying to do otherwise? Or at least, how can you apply it to make your web application more responsive and interactive than the typical junk you've been pumping out?

You can be a much more productive developer. The whole world is at your fingertips, you only need to pay attention to it and learn how to leverage it to your advantage.

 

And these things, I believe, are what is going to drive the Web 1.0 Morts in the direction of Web 3.0, building on the hard work of yesteryear's progress and making the most of the most powerful, flexible, stable, and comprehensive server and web development technology currently in existence--ASP.NET and Visual Studio--by breaking out of their molds and entering into the new frontier.




Opinion | Web Development | ASP.NET

Separation of American Concerns

by Jon Davis 29. September 2008 17:40

I'm going to post a non-technical and non-career related entry now and comment on the state of our nation, the United States of America. I'm only posting here now because I haven't gotten around to putting up a personal blog yet, and I'm not sure that I ever will.

I am a fellow with a lot of emotional baggage. I have a lot of opinions, a lot of hurts, and a lot of reasons to feel lonely or sad. I don't know if three months from now I'll even have a home, because my current job is a short-term contract gig. Yet, at this precise moment I'm sitting comfortably in front of a new 28" monitor that I bought for myself a few weeks ago, on a nice, equally new office chair with suede fabric, typing on a cool little Mac Mini in my home office, while three other PCs are humming in the living room--a laptop, an HTPC, and a music / gaming / development workstation. My stomach is full. There's gas in my car .. and indeed, I have a car. So, who cares if I've felt grumpy lately? Not a soul in the world; I have every reason to be satisfied in life, because today I am comfortable.

These circumstances in my personal life are a lot like America's circumstances on the whole right now. No one knows what tomorrow will bring. People have fear. But right now, there's still an economy going out there, and even if things get a lot worse than they are now, we'll still be an insanely comfortable nation. We might be shifting from a plush extremity to normalcy. This shift is true of housing prices, especially. Greedy people are facing losses. Wiser people are going to be challenged. But look at us. Who should really feel sorry for us if we lose some of our wealth? Now, granted, other countries are going to suffer from America's so-called "losses". But frankly I think the whole situation is a lot more dramatic than real.

The thing to keep in mind about this situation is that it's primarily related to the greedy home buying that went on in the beginning and middle of this decade, and the greedy banks that wanted to be in on it, too. I myself got sucked into it because there was just so much greed--greed was the popular thing--and with my rental apartment converting to a condo, I didn't want to move. I never dreamed, moving to Scottsdale, Arizona, that I'd "own" (or rather, pretend to own, having a mortgage with no equity) a home of my own that was "worth" (at mortgage value) a quarter of a million dollars. Frankly, I can't afford it; I'll never afford it. It's a second-floor apartment, not a mansion with a yard and a fence. Yet I've refused to throw myself into foreclosure just because the real value is less than the mortgage. That's exactly the action, multiplied by tens of millions of bad citizens, that put us into this mess to begin with. I am a firm believer in doing what's right on principle, no matter the cost, and that means if you take a risk, you face your own consequences.

The world does not owe me a standard of living. I was lucky enough to have the opportunities I've had, and then I chose to take advantage of them as much as I could. But now if those opportunities are taken from me, it is selfish lunacy for me to bicker and argue and demand that the world restore the opportunity before me again, immediately. What motivates a mind to believe that the world owes us anything is only gluttony of selfishness and greed.

The role of government in our society was never meant to be a panacea. Congress's failure/refusal to pass the bailout bill today--a bill that would have rewarded the most greedy men in our nation with "free" money (a loan, I assume)--actually gave me a sense of relief. I am neither a Democrat nor a Republican. But I am offended that Republicans call themselves "conservatives" when they blow as much money as they do. And I'm equally scared of Democrats who seem insistent on converting our nation into a socialist state.

People compare this nation's circumstances and the potential circumstances to come with the Great Depression. I tend to think it more likely that we will go into war with another Hitler. It's just not going to happen right now. There are two really good reasons why.

First, this is not a stock market crash, this is a real estate crash. The home prices were out of control. But when the dust settles, there's something different about real estate versus stock: real property. Unlike a business that will fold once it loses funds, a home doesn't just vanish into thin air once it forecloses. Like bars of gold, a home retains most of its genuine value, simply by being. Ultimately, it will eventually balance out. As for the banks that are in the middle of this mess, they, too, were going out of control. I have no fewer than three credit cards in my wallet right now, when theoretically I really only need one (or none?). The consumerism in this nation was also out of control, and in saying that I'm pointing at myself, too. Now in the area of investments, this is a GREAT time to invest in things of high retainability (real estate, commodities, etc.), I think. Things are going to get a lot "worse" over the next several months, but the more things go down the happier I am, because investment deals are looking more and more attractive. We should all be celebrating and investing, because the tide comes and goes with regularity.

Second, communications, transportation, technology, and economic education have created an unbreakable mesh framework for our nation's infrastructure. The Great Depression's worst case scenarios happened because buyers and sellers could not communicate, people with supplies could not reach people in need, silence multiplied panic and pessimism, and ignorance fueled one bad business decision after another. You can tell our banks to eat their own fat. That's not going to bring down the businesses who have properly planned and are not built on stock revenue, or loans handed out to people who had no intention to swallow their own risk.

But the reason for the title of this article is because, frankly, the banks' own foolishness and greed is not something that deserves to be rewarded with hand-outs, any more than I deserve to have my mortgage dropped just so I can go on to the next cheapest home in this buy-low market. I took a risk, and I'm going to face my own consequences, not ask society to pay for my choices. And the people who walked away from their homes who wouldn't have done so if only their home value remained the same as their mortgage value do not deserve a break, either. If you buy a home, you make an investment at your own risk. Society doesn't owe you anything.

Granted, the role of government is to maintain peace and order, and a troubled economy is understandably a topic of concern for government. But to hand out money to the foolish decision makers who dug their own hole, just for them, is not going to help the problem, it's only going to reward the fools and teach them that they can be such greedy fools, and that everyone else, not them, has to pay for their excess. The bailout proposal would have left me, the home buyer who was willing to see this bad mortgage through for years to come, at a severely unfair loss, as those who simply walked away from their homes were rewarded. 

Alright, fine, the banks, not the people who walked away from their homes, would be seeking recovery; the people who walked away already did the walking, and now the banks are left in the cold. I have no pity for them. They chose their risks. Besides, it was the private banks that introduced fiat money and the income tax, to begin with. The income tax goes right back to the Federal Reserve, which is a private banking cartel including the likes of JP Morgan / Chase, to cover the cost of loan interest from congressional over-spending. This was the banks' design.

Personally if I had any say in the matter--I don't, but if I did--I'd say, let the banks ride out their own mess, and get back to work making money without loans. Quit borrowing money, and learn to save money instead.

Actually, the best thing Americans can do for the banks as well as for themselves, right now, is to start saving money in a savings account. The whole reason why we're in this mess is because of banks' lack of liquid assets. I think what a good Presidential leader should have done, instead of what ours did, is to ask the American people to start putting money into the banks today, rather than rely on taxes tomorrow, with all their overhead, as a way to balance the problem out.

Let the government focus on our safety. Let the banks suffer their own grief. I do not want to be party to a socialist society; that's just one step away from communism, and we're close enough to that as things already are.



Opinion | Taxes

Hiring The Inquisitive Mind

by Jon Davis 25. September 2008 18:40

You walk into a new contract to perform customizations to the company's proprietary web app. It's a mid-sized web app with a few hundred .aspx files and thousands of .cs files, with a detailed data layer. You're tasked to perform customizations to one of the more complex pieces of the web app, which uses third party controls and lots of code built around UI events.

The boss knows the answer to all questions, but he's only there about a quarter of the time, and when he is there he's only there for a few minutes and then he's gone for at least an hour again.

You want to be on your best behavior, but part of that is proving that you can be very productive. The person who originally implemented the code you're customizing has left the company. There is one fellow developer sitting on the other side of the cubicle. Since the boss is usually gone, you are tempted to ask questions to the other developer, but you know you're only given a few minutes / hours / days' grace to interrupt his/her workflow.

Question: How many minutes, hours, or days is it appropriate to keep asking the other developer very trivial questions in the boss's absence, such as "What does this object do? It's not obvious, and I'm not seeing it referenced anywhere except here where it's updated," "Does the application have XX functionality that I can reference?", or "Do you know what I should do about YY behavior?"

 I asked this question in a "room" filled with fellow developers, and received very contrasting responses:

Answer 1: "Why don't you look at what the 'object' does yourself? 'Do you know what I should do about YY behavior' in my world becomes 'Where are the g**d*** SPECS?'"

Answer 2: "You're asking a really standard question for contract developers. It's really normal for a programmer that is being pushed too hard for fast development by the boss to write crappy code with little documentation and then quit from the pressure hehe. Then they hire someone else to fix the code. The question is, has the boss become more relaxed to allow the developer to do their job, or is the boss still just the boss. No doubt, to form good code, a programmer must never be rushed. But the boss is a boss first. Their duty is to productivity and revenue. There are good and bad bosses. Some bosses also know the employees that they're dealing with, but some are just delegators, not managers. You have to remember, bosses start as employees somewhere and are promoted over time. Some justly, some not so justly. They may or may not have the qualifications to be the boss. If it's possible, just map out the main functions of the program and concentrate on the areas that the boss is asking be reworked first. Try to take a little time each day to add comment summaries to each function if they aren't already there. Start small. Over time you'll learn how the puzzle fits together and can address systemic problems then. More than likely the boss is watching you more to see how quickly you can analyze code and be self-sufficient. I'd limit the questions to only what is absolutely necessary with the other workers, hehe. The answer depends on the team and personalities, but [in the scenario you described,] you're too new to make that assessment. For now the answer is 0. Make a friend first."

Answer 3: "We have a mentor program here. New guys get a more seasoned person of their discipline to ask questions and get them up to speed. They also realize it takes anywhere from 3-9 months before the new person is self-sufficient."

When I was a teenager in 1994, I witnessed something that has followed me throughout the rest of my career. My father was going through a difficult period in his life, and he realized it had affected his job, so much so that he knew it was ending. In a discussion with his boss, he asked, "Did you expect me to hit the ground running?!" To which the boss replied, "Yes. I expected you to hit the ground running." This startled my dad, but he realized that his boss was right. And as he shared this experience with the rest of us, I could feel his pain. Even as I write this, I shed a tear or two for him, in memory of his anguish, because tough lessons like that are more devastatingly difficult to learn than anything else. Indeed, we as a family were all the way on the other side of the planet, in Nairobi, Kenya, on behalf of this job. And we had to turn back, and head back to the United States, because of a hard lesson learned.

I've had quite a few jobs in my career. I have sometimes (usually?) managed to show up trying to hit the ground running. And I generally succeed: I showed up being as productive as everyone else on the preexisting team within two or three days. I haven't been able to do that, though, without asking a lot of questions. But in no time flat I would turn around and have a lot of answers, too, that had been lost to the others on the team--I tried hard to make the most of my combination of industry experience and a fresh pair of eyes. Within weeks, I'd pooh-pooh years-held workarounds to bugs and come up with better, faster, and more reliable solutions and share them with the team.

 

The more you know, the more you realize how little you know. (And if you don't know that then you're missing out.)

On the practical end, the above line ("The more you know...") proves itself when the questions being asked further the progression and involvement of development and deployment. Name any technical lead who is successful without being a great communicator--partly by the spoken word, and partly by asking questions and listening to answers in order to formulate the appropriate solution for the situation.

But I'll admit that on the flip side, asking a lot of questions up front in a new environment and situation makes a person appear unskilled. Personally, I think the opposite is the truth, but most folks don't realize it. An inquisitive mind is one who is smart enough to form the question--smart not only in choosing to ask and how to ask, but in knowing what questions to ask in the first place. Such a person is also often careful not to sit around spinning wheels, which is a waste of not only that person's time but, more dangerously, the company dollar.

The mindset of the person who hates to be asked questions is usually driven by the frustration of being suddenly distracted from their work and having to face someone else's problem rather than their own. Seriously, though, how selfish. I knew a guy who made me stand there and wait five minutes while he coded, occasionally holding up a #1 finger (the "one moment" gesture), just so I could get my fifteen seconds out of him for an instant-answer question. How much time is lost to a question through the loss of one's train of thought? I tend to think it's about double the time of the question. Even so, if a question takes two minutes to answer and four minutes to refocus from, that's six minutes lost--versus another five minutes of waiting on top of it, or thirty minutes lost to wheel-spinning by the asking party.

In my perspective, the most productive teams are the ones where all members are able to openly share the minds of the others, collaborate, and work together, for at least either a few short spurts during the day or for one or two dedicated hours during the day. This as opposed to coding in total heads-down isolation, with the same amount of time as otherwise spent collaborating, spent instead chatting about life and toys.

An IT / networking guy once told me, in the context of this, "What I tell the guys working for me is, 'Do what you can to try to figure out what you can for at least thirty to sixty minutes. If you still can't find an answer during that time, go to a co-worker, and if you still can't figure it out, only then should you come to me.'"

This attitude seems to be shared by many people in the technology industry; however, in the software engineering field, I am thoroughly persuaded that it is flat out wrong.

Do the math!

A) You spend two hours spinning your wheels. You finally reach what you think to be a eureka moment discovering the answer to your problem, but you soon discover that you were wrong. You spend another hour, and another after that, on false-guess workarounds to your problem. In the end, another hour is spent debugging your guesswork workarounds, but it all eventually gets functional, on your own.
Time/pay units wasted: 5 hours. And by the way, you're looking pretty sucky at this point.

B) You spend an hour spinning your wheels. You give up and talk to a permanent employee, who then gives up after fifteen minutes and so you go talk to the manager, who has the solution explained within five minutes. The co-worker loses another 20 minutes trying to get his/her mind back on track to what he/she was working on.
Time/pay units wasted: 2 hours (60 + 15x2 + 5x2 + 20 minutes).

C) You spend five minutes spinning your wheels. You give up and talk to your manager, who has the answer explained within five minutes.
Time/pay units wasted: 15 minutes (5 + 5x2).

Assuming that these scenarios are typical, only one conclusion can be made. From the perspective of the company's and tasks' best interests, there is no better person to have on your team than the one who asks a lot of questions up front. The more skilled individual will ask the more relevant question, not the lesser question. Questions that would solve the problem quickly--such as ideas and proposals in question form--rather than questions that introduce more questions.

Time spent spinning wheels is even more expensive if you're on a contract through a company like Robert Half Technology. Any company hiring through such a staffing firm is going to be paying out anywhere from 10% to 150% on top of what you're making. So when I'm working with an agency and I'm "out on the field", so to speak, as a contractor on a short-term "job" trying to accomplish some special tasks, I am absolutely frightened of spinning my wheels.

Indeed, while some managers would consider the highly inquisitive employee to be "high maintenance", supposedly "high maintenance" employees often prove to be the most valuable people they manage.

Incidentally, I don't just ask questions to fix my problems. Sometimes I ask questions to understand, so that I can be self-sufficient beyond the problem. I often ask questions to listen to the answerer's adjectives, word choice, side comments, and back stories. This is very valuable information that can really help in the understanding of the bigger picture and the way things are.

Not only do I feel free to produce questions, though, I also love to answer questions. Questions make me feel needed. They make me feel knowledgeable. They make me feel respected. No matter how extensive, complex, or brainlessly simple they are, and no matter how busy I am, I love it when people pick my brain.

The love of answering questions comes so naturally for me that I forget that other people hate it when I myself ask questions. But after so many jobs where I've been hounded for asking so many questions up front, I've reached some very important conclusions, about which I am beginning to feel strongly.

1. I need to ask few or no technical questions of peers I don't know very well, because those whose time I am taking feel nothing like I feel when I enjoy being asked questions. Co-workers tend to hate questions like they hate bugs and spam. The manager, on the other hand, should be fully able either to answer the questions or to assign someone to answer them. The manager's job is indeed to meet your needs. Both of you work for the same company. You don't work for the manager any more than the manager works for you. If the manager doesn't see it that way, and if the manager's refusal to give you his/her time or time with a co-worker is killing your productivity, then it's a crappy job and, if you have the option, you should look for work elsewhere. But if he/she is willing, no matter how intimidating, impose on your manager with questions, not your co-workers.
 

2. Companies and team leaders need to learn to train teams to embrace questions and to teach people to enjoy answering peer questions rather than hating them. I don't know what it is that makes me so happy to be asked questions, but I do believe it's a learnable trait. I'm also quite positive that it's something that can benefit a team and a company at large; I've seen almost nothing but good come out of it, and the only negative I've seen has been the occasional flare-up of feelings and agitation. That, I believe, is correctable.

People need to be trained to be intellectually honest. This is a cultural mindset that says that:

  • It doesn't matter what you do or don't know, your knowledge about proprietary systems is a proprietary company asset that belongs to the company, not to you or to any one individual.
  • Answering questions earns you respect.
  • Admitting you don't know something wins you the opportunity to discover from the questioner's findings.
  • Being a go-to person is a role of leadership that brings you higher up the ladder (unless you refuse to answer, in which case respect goes through the floor).

3. Despite all that I've said, it's okay to put an end to too many questions. If someone is bothering you, it's fine to say, "Hey buddy, I think you're ready to be on your own now, if you have any further questions please save them for the manager." Completely okay. What's not okay is instead putting on a pouty face, making strange noises of irritation, lying through your teeth about how you don't know, and telling the boss that you're unable to get your work done because you're getting pestered. Until you've communicated a request for questions to stop (politely, ideally), that behavior is selfish.

I once had a guy sitting next to me who ignored me for one or two hours straight on a question that absolutely needed about sixty seconds of his time--he and I were the only engineers in the office and my hands were tied--before he spun around and shouted, "I don't f#$%'n know!!" This behavior is uncalled for. If I were a manager and witnessed that, I'd fire such a person on the spot, with absolutely no care for the cost. That's beyond rude; it's reflective of a long-growing seed of unaddressed resentment and untrustworthiness, the kind of negative growth pattern that can tear a team apart and make it never function normally again until the problem is expunged. But that's just my view on that; I'm the one who got yelled at.

Why I'm Unimpressed With Rawness Of Skillz

by Jon Davis 7. August 2008 06:40

Since forever, geeks who take themselves seriously have loved to brag such things as, "I use Notepad to edit web pages". Carrying this over to actual programming, "I never click into the designer when editing my ASPX", or "I never design a database using designer tools, I always design it all using raw T-SQL," or "I always update my SVN from the command line". (Someone in a local tech user group bears the post signature, "Real men use Notepad.")

Puhleeze. I'm not impressed, and frankly I think anyone who brags like this should get a swift kick in the pants.

IMO, there are three levels of elevation to guruism:

  1. Awareness: Discovering the tech and the tools (like the WYSIWYG web editor .. "I'm a WEB MASTER, and you can, too!").
  2. Intelligence: Swearing by Notepad and proudly refusing to use the WYSIWYG editor.
  3. Wisdom: Knowing when to use the right tool at the right time in order to either save time or to produce the best output. Yes, that means being fully capable of staying away from the WYSIWYG editor or the designers, but it also means being completely, 100% unafraid of such tools if they serve the purpose of helping you write better code, more productively.

I get really turned off when co-workers smirk and look down their noses at me when I mention that I'm a tools collector, as if their refusal to use anything but the textual view of SQL Query Analyzer, the C# plain-text editor, and the command prompt somehow made them superior. The fact of the matter is, these are the people who produce output that shares predictable characteristics:

  • Web pages are thrown together without thought to design.
  • Web page markup is excessive due to hit-and-miss browser testing rather than design-mode utilization.
  • Code is disorganized and messy.
  • Class libraries and databases are designed ad hoc and without thought towards the bigger, conceptual picture.
  • Databases lack indexes and referential integrity.
  • Buggy implementations take ages to be debugged due to refusal to fire up a debugger.

Yes, let's look at that last item. I don't know about you, but I am, and have always been, an F5'er. (F5 invokes the Debug mode in Visual Studio.)

Learn how to debug. With a debugger.

At a previous job, I discovered for the first time in my career what it was like to be surrounded by hard-core engineering staff who refused to hit F5. Now, granted, the primary solution that was fired up in Visual Studio took literally over a minute to compile--that means F5 would require a one-minute wait for the simplest of changes if it wasn't already running in Debug mode. But even so, it's such a straightforward and clean way to get to the root of a problem that I don't see how, or why, anyone would want to go without a solid debugger to begin with.

Re-invoking code and then reading the resulting error messages is not an acceptable debugging methodology.

Instead, set breakpoints and use the introspection tools. Here's how I debug:

  1. Set a breakpoint at the top of the stack (where the code begins to execute). If using browser-side Javascript, add the line "debugger;" to the code.
  2. Hit F5.
  3. If the user (that's me at this point) needs to do something to get it to reach the breakpoint, do it.
  4. Once the breakpoint is reached use F10 (Step Over) or F11 (Step Into) to follow the execution path.
    • Always watch the value of each and every variable before proceeding to the next line of code. I monitor variables by watching the Locals window, or, if some method needs to execute to fetch a value or the variable is in broad scope, I put it in the Watch window.
    • Always watch the values of each and every source property before it gets assigned to something, by hovering over it with the mouse and letting the tooltip appear that exposes its value. For example, in "x = myObject.Property;", only myObject will appear in the Locals window, and I won't see the value being assigned until it is already assigned, unless I hover over ".Property" or add it to my Watch window.
  5. If a nuisance try...catch routinely occurs such that it becomes difficult or tiresome to find where in the stack trace the exception was thrown, I might try commenting out the "try" and the "catch", or find the option in the Exceptions dialog that stops the debugger on all exceptions (to find that dialog you'll have to right-click a toolbar, choose Customize, and drag the menu item up to the menubar to add it, as it's not there by default).

90% of the time, I can catch a bug by careful introspection in this manner within a couple minutes.

What the "raw skillz" folks would rather do is go backwards. Oh, it's puking on the data? Let's go to the database! Fire up SQL Query Analyzer! SELECT this! F5! SELECT that! F5! (F5 in SQL Query Analyzer, or Query view for SQL Management Studio, doesn't debug. It executes, raw. SQL doesn't have much debugging support.) Hmm, why's that data bad? Let's clean it up! UPDATE MyTable SET SomeField = CorrectValue WHERE SomeField = WrongValue ... Now, why'd this happen? Why's it still not working? I dunno!!

Oh just kill me now. That's not fixing bugs, that's fixing symptoms. If roaches ate all the pizza, this would be like replacing the pizza where it sat. Feast!!

Worse yet is when the whole system is down and the fellas are sitting there doing a code review in the effort to debug. Good lord. Shouldn't that code review come before the system went live? And, once again, F5 can and should save the day in no time at all.

Use SQL Profiler and the management code libraries.

In the SQL Server world, the closest equivalent to Visual Studio's F5 is the SQL Profiler. If you're seeing the database get corrupted and you're trying to troubleshoot and figure out why, use the Profiler. There are also the management libraries, which might provide some insight into the goings-on of database transactions, from a programmatic perspective.

Ironically, shortly after I joined the team at my previous job, I introduced SMO to the database guru. Nearly two years later, after I had put in my resignation, the same fellow introduced me to SMO, apparently forgetting that I introduced it to him to begin with. But in neither case did either of us actually do much, if anything, with SMO.
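
For the curious, here's a small taste of what SMO looks like (assuming references to Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo; the server name is whatever instance you point it at):

    using System;
    using Microsoft.SqlServer.Management.Smo;

    // Walk a SQL Server's catalog programmatically with SMO.
    class SmoTour
    {
        static void Main()
        {
            var server = new Server("localhost");
            foreach (Database db in server.Databases)
            {
                Console.WriteLine(db.Name);
                foreach (Table table in db.Tables)
                    Console.WriteLine("  {0} ({1} rows)", table.Name, table.RowCount);
            }
        }
    }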

SQL transactions are a tool. Use them.

There's nothing like watching a database get corrupted because of some bug, but it's despicable when it stays that way because the failure didn't get rolled back. Always build up a transaction and then commit only after doing a verification.
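
In ADO.NET terms, the discipline looks something like this (the Accounts table and the verification rule are hypothetical):

    using System;
    using System.Data.SqlClient;

    // Do the work inside a transaction, verify it, and only then commit;
    // anything unexpected rolls the whole thing back.
    static void DebitAccount(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                try
                {
                    var cmd = new SqlCommand(
                        "UPDATE Accounts SET Balance = Balance - @amt WHERE Id = @id",
                        conn, tx);
                    cmd.Parameters.AddWithValue("@amt", 100m);
                    cmd.Parameters.AddWithValue("@id", 42);

                    if (cmd.ExecuteNonQuery() != 1)   // the verification step
                        throw new InvalidOperationException("Unexpected row count.");

                    tx.Commit();
                }
                catch
                {
                    tx.Rollback();   // leave the data uncorrupted
                    throw;
                }
            }
        }
    }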

Don't hand-code database interop with user views. 

Let's look at ORM tools. Put simply,

  • If it saves coding and management time, it's an essential utility.
  • If it performs like molasses, it's crap.
  • If it is always dispensable, it's acceptable.
  • If it gets "rusty", needs routine maintenance, or was built on a home-grown effort, it's junk.

Code generators are iffy. They're great and wonderful, if only there are enough licenses to go around and they're always working. I was recently on a team that used CodeSmith, but the home-grown templates broke with the upgrade to a recent version of CodeSmith, so everything died out. Furthermore, all of the utilization of CodeSmith revolved around a home-grown set of templates that targeted a single project, and no other templates were used. And last but not least, there were only two or three licenses, and about four or five of us. So between these three failure points, it was shocking to me when my boss got upset with me for daring to want to deviate away from CodeSmith and consider an alternate tool for ORM, such as MyGeneration or SubSonic, when I began working on a whole new project.

Later, I met the same frustration when LINQ arrived. Hello? It's only as non-performant as one's incapacity to learn how it ticks. And it's only as unavailable as our unwillingness to install .NET 3.5--and, by the way, .NET 3.5 is NOT a new runtime; like 3.0, it is just some add-on DLLs on top of v2.0.

Writing code should be tools-driven too.

Do basic designs before writing code. Make use of IntelliSense (for SQL, take a look at SQL Prompt). Use third-party tools like Resharper, CodeRush, and Refactor! Pro. Mind you, I'm a hypocrite in this area; I tried Resharper and ran into performance and stability issues, so I uninstalled it. I have yet to give the latest version a try, and the same is true of the other two. But some of the most successful innovators in the industry hardly know how to function without Resharper. It doesn't speak well for them, but it does speak well for Resharper. There are lots of other similar tools out there as well.

UPDATE (8/26/2008): I've finally made the personal investment in Resharper. We'll see how well it pays off.

Don't be afraid of the ASPX designer mode.

I like to use it to validate my markup. Sometimes I accidentally miss a closing '>' or something, and the designer mode will reveal that to me much faster than if I attempted to execute the project locally. Sometimes it also helps to just be able to drag an ASP.NET control onto the page and edit its attributes using the Properties window; this is purely a matter of productivity, not of competence, and fortunately the code editor supports IntelliSense sufficiently that I could accomplish the same job without the Designer mode--it would just be a little bit more work and, being manual, a bit more prone to human error.

Automate your deployments.

Speaking of human error, I have never been more impressed by the sheer recklessness of team workflow than by the routine manual deployment of a codebase across a server farm. At a previous job, code pushes to production would go out sometimes once a week and sometimes every day, and each time it took about half an hour of extreme concentration by the person deploying. This person would be extremely irritable and couldn't handle conversations or questions or chatter until deployment completed. Regularly, I asked, "Why hasn't this been automated yet? You can bump those thirty minutes of focus down to about one minute of auto-piloting." The response was always the same: "It's not that hard."

To this day I have no idea what on earth they were thinking, except that perhaps they were somehow proud of going raw--raw as in naked and vulnerable, such being the nature of manual labor. Going raw is stupid and dangerous. One wrong move can hurt or even destroy things (like time, sanity, and/or reputation). There's nothing to be proud of there. Thrill seekers in production environments don't belong in the workplace. Neither does insistence upon wasting time.

Design like you care.

Designers aren't just good for web layouts. I've particularly noticed how supposed SQL gurus who don't design database tables using the designer, preferring to just write the CREATE TABLE code by hand, tend to leave out really important and essential design characteristics, like referential integrity (setting up foreign key constraints) or alternate indexes. Just because you can create a table in raw T-SQL doesn't mean you should.

The designers are essential in helping you think about the bigger picture and how everything ties together -- how things are designed. Quick and dirty CREATE TABLE code only serves one purpose, and that is to put data placeholders into place so that you can map your biz objects to the database. It doesn't do anything for RDBMS database design.

I used to use the Database Diagrams a lot, although I don't anymore, simply because I hate the dialog box that asks me if I want to add the diagram table to the schema. Even so, I'm not against using it, as it exposes an important visual representation of the referential integrity of the existing objects.

Failing that, though, lately I've been getting by with opening each table's Design view and choosing "Relationships" or "Indexes/Keys". I then use LINQ-to-SQL's database diagram designer, where inferred relationships are clearly laid out for me, assuming I'm using LINQ as an ORM in a C# project. If I see a missing relationship, I'll go back to the database definition, fix it, and then drop and re-add the objects in the LINQ-to-SQL designer diagram after refreshing the Server Explorer tree in Visual Studio.

vi is better than Notepad.

If you must edit a text file in a plain text editor, vim is better than Notepad. No clicky of the mouse or futzing with arrow keys. The learning curve is awkward, but NOTHING like Emacs's, so count your blessings.

I'm kidding, but the point is that there's nothing "manly" about Notepad. Of course, for the GUI-driven Windows world, better than vi or vim or anything like that are the free Notepad replacements out there; I use two of them regularly myself.

In any case, there's nothing wrong with using Notepad or some plain toolset to do a job, but only if you're using the simpler toolset out of lack of available tools. You might not want to wait for two minutes for Visual Studio to load on crummy hardware. You don't want to wait for something to compile. Whatever the limitation, it's okay.

But please, don't look down on those of us who opt for wisdom in choosing time-saver tools when appropriate; you're really not helping anybody except your own ridiculously meaningless and vain ego.




Software Development | Opinion


 



