Don't Spread Yourself Too Thin

by Jon Davis 2. September 2007 20:32

At work, in any workplace really, the leaders and supporters (myself included, in whatever role I play as a consultant, software engineer, or otherwise) are hindered, and the value of their work is diminished, in direct proportion to how thinly they spread themselves. I tend to look upon myself with shame, for instance, when I see myself as a "jack of all trades, master of none", which is something I really don't want to be. I'd much rather be "knowledgeable of much, master of much", even if "much" does not come anywhere close to "all". I feel like I should be able to afford that, since I am single, have no life except the life in front of my computer(s), and my sole hobby is software. Even so, I struggle to master my skills and to stay on top of current technologies.

When we take our talents and spread them too wide in the workplace, we find ourselves making excuses for not taking the time to produce quality work. Mediocre output starts getting thrown out there, and no one is expected to apologize because everyone was too busy to take the time to pursue excellence in the output.

I've started to notice that the rule of "spreading yourself thin makes you worthless" is a universal truth that doesn't just apply to people but also to software. I've been "nerding out" on alternative operating systems lately, so that's where I'm applying this. The Haiku OS project stands apart from some other projects in this respect. Tidbits of wisdom like this one have been sticking in my head:

http://haiku-os.org/documents/dev/what_are_you_looking_at

While we developers are used to getting into "the zone" and flinging code that (mostly) works the way it should, this is not the kind of focus to which I am referring. When developing an application, the one thing that must not become the sole focus is the technology. Technology is the means to the goal. Should we focus on the cool stuff we can build into software, we lose sight of what it is ultimately supposed to do; just like running a marathon while staring at your feet, one tends not to see problems until one quite literally runs into them. What is the goal? The ultimate goal is to write good, fast, stable software which does exactly what it is supposed to do (and nothing else) while making it accessible to the intended type of person, and only one person of that type.

This goes along with so many different ideologies in software architecture discussions that had me scratching my head because I had a hard time swallowing them, like, "Don't add members to your classes that are not going to meet the needs of the documented requirements, no matter how useful they seem." It took me a few days to accept that rule, because I constantly have ideas popping up in my head about how useful some method might be, even though I couldn't yet point to a requirement for it. But I later learned why it is an important rule. Once a member gets added, it becomes a potential bug; all code is a potential bug, and the bugs get QA'd on behalf of the requirements, not on behalf of the mere presence of class members. Of course, TDD (test-driven development) and agile / XP practices introduce the notion of "never write code until you have written a failing test", so that your new experimental method becomes integrated with the whole test suite. But now you've doubled, or even quadrupled, your workload. Is it worth it? It might not matter for a few simple methods, but if that becomes a pattern, then eventually you have a measurable percentage of experimental code versus actual, practical requirements implementations.
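
To make that rule concrete, here is a rough C# sketch of "write the failing test first" (my own illustration, assuming NUnit; the InvoiceCalculator class and its members are made up for the example and come from no real project). The test expresses a documented requirement, it fails until the member exists and behaves correctly, and nothing gets added to the class that the test, and therefore the requirement, does not demand.

    using NUnit.Framework;

    // Hypothetical example: the requirement says "an invoice total is the
    // sum of its line amounts." The test is written first and fails until
    // the implementation below satisfies it.
    [TestFixture]
    public class InvoiceCalculatorTests
    {
        [Test]
        public void Total_SumsLineAmounts()
        {
            InvoiceCalculator calc = new InvoiceCalculator();
            calc.AddLine(10.00m);
            calc.AddLine(2.50m);
            Assert.AreEqual(12.50m, calc.Total);
        }
    }

    // The implementation contains only what the failing test (that is, the
    // requirement) demands -- no "this might be useful someday" members.
    public class InvoiceCalculator
    {
        private decimal _total;

        public void AddLine(decimal amount)
        {
            _total += amount;
        }

        public decimal Total
        {
            get { return _total; }
        }
    }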

There is one thing to keep in mind about the above quote, though: that philosophy applies to applications, not to operating systems. The above-quoted article is about building user interfaces, which is an application-level subject. An operating system, on the other hand, should at one level have the reverse philosophy: technology over function. But let me qualify that: in an operating system, the technology IS the function. I say that because an operating system IS, and MUST BE, a software runtime platform. Therefore, the public-facing API (which is the technology) is key. But you are still focused on the requirements of the functions. You still have a pre-determined set of public-facing API interfaces that were determined ahead of time in order to meet the needs of some specific application functionality. Those MUST be specific, and their implementations MUST be strict. There MUST be no unsupported public interfaces. And because the OS is the runtime environment for software, the underlying technology, whether assembly code, a C runtime API, the Common Language Runtime, the Java Virtual Machine, etc., is all quite relevant in an operating system. You might say it is only relevant at the surface tier, but it is also relevant at lower tiers, because each technology is an integration point. In the Singularity operating system, for instance, which is 99.9% C# code, it is relevant down to the deepest tier.

The reason why I bring up Haiku OS vs. other open-source initiatives, as if they had contrasting interests, is that they very much do. Haiku does in fact have a very specific and absolute milestone to reach, and that is compatibility with BeOS 5. It is not yet trying to "innovate", and despite any blog posts indicating opinions otherwise, that may in fact be a very good thing indeed. What happened with the Linux community is that all the players sprawled out across the whole universe of software ideas and concepts, and the end result is a huge number of software projects, all of them mediocre and far too many of them buggy and error-prone (like Samba, for instance).

More than that, Haiku, as indicated in the above quote, seems to have a pretty strict philosophy: "focus on good, solid, bug-free output of limited features, rather than throwing in every mediocre thing but the kitchen sink." Indeed, throwing in every mediocre thing but the kitchen sink is what Linux seems to do. There's not much going on in the Haiku world, but with what little it does have, it sure does make me smile. I suppose some of that, or maybe a lot of it, has to do with the aesthetic design talent involved on the project. Linux is *functional*, with aesthetics slapped on as an afterthought. Haiku appears at first glance to be rather clean, right down to its code. And clean is beautiful.

Going around adding futuristic stubs to code is something I've been guilty of in the past, but I've found it to be a truly awful, horrible practice, so much so that it makes me moan with disgust and terror. It makes a mess of your public-facing APIs, where you have to keep turning to documentation (and breaking tests) to discover what is implemented and what is not. And it leaves key milestone code in a perpetual state of being unable to be RTM'd (released to manufacturing). The best way to write software is to take the most basic functional requirement, implement it, test it thoroughly until it works in all predictable scenarios, add the next piece of functionality, test that piece thoroughly (while testing the first piece of functionality all over again, in an automated fashion), and so on, until all of the requirements are met. In the case of an operating system, this is the only way to build a solid, stable, RTM-ready system.
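
To illustrate what I mean about stubs making a mess of the public-facing API, here is a rough, hypothetical C# sketch (the class and both members are invented purely for illustration). The stubbed method compiles, shows up in IntelliSense and in generated documentation as though it were supported, and yet every caller and every test that touches it blows up at runtime; that is exactly the "is this implemented yet?" guessing game I described.

    using System;

    public class BlogPublisher
    {
        // Implemented: required by the current milestone and covered by tests.
        public void PublishPost(string title, string body)
        {
            // ... actual, tested implementation goes here ...
        }

        // Anti-pattern: a "futuristic stub" for a feature no milestone calls
        // for yet. It is part of the public API surface, so callers and tests
        // only discover at runtime that it does nothing.
        public void PublishPodcastEpisode(string audioFilePath)
        {
            throw new NotImplementedException();
        }
    }

The better alternative is simply to leave PublishPodcastEpisode out entirely until a milestone actually calls for it, and then add it test-first, as in the earlier sketch.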

Microsoft published some beta Managed DirectX 2.0 code a while back, and I posted on their newsgroups, "Why on earth are you calling this 'beta' when the definition of 'beta', as opposed to 'alpha', is that the functionality is SUPPOSED to be there but is buggy? Only 'alpha' releases should include empty, unimplemented stubs, yet you guys throw this stuff out there calling it 'beta'. How are we supposed to test this stuff if the stubs are there but we don't know if they're implemented until we try them?" Shortly after I posted that rant, they dropped the Managed DirectX 2.0 initiative and announced that the XNA initiative was going to completely replace it.  I obviously don't think that my post was the reason why they dropped the MDX2 initiative, but I do think it started a chain of discussions inside Microsoft that made them start rethinking all of their decision-making processes all the way around (not exclusively, but perhaps including, the beta / alpha issue I raised). Even if just one of their guys saw my rant and thought, "You know, I think this whole MDX2 thing was a mistake anyway, we should be focusing on XNA," I think my little rant triggered some doubts. 

The ReactOS project also has me scratching my head. Instead of littering the OS with placeholders for wanted Windows XP / Vista UI and API features while the kernel is still painfully unfinished (which is indeed what I have observed), the ReactOS team should be saying: okay, we're going to set these milestones, and we're not going to add a single stub or a single line of code for the next milestone until the current milestone has been reached. Our milestones are specifically and clearly defined:

  1. Get a primitive kernel booting off a hard drive. Execute some code at startup. Master introspection of the basic hardware (BIOS, hard drives, memory, CPU, keyboard, display, PCI, PCIE, USB controllers).
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!!
  2. Basic Execution OS. Implement a working but basic Windows NT-like kernel, HAL, FAT16/FAT32 filesystem, a basic user-mode runtime, a basic Ethernet + IPv4 network stack, and a DOS-style command line system for controlling and testing user-mode programs.
    • This implements a basic operating system that will execute C/C++ code and allows for future development of Win32 code and applications.
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!! 
    • Making this milestone flawless will result in an RTM-ready operating system that can compete with classic, old-school UNIX and MS-DOS.
  3. Basic Server OS. Implement a Windows 98-level feature set of Win32 API functionality (except deprecated APIs), using Windows XP as the Win32 API design and stability standard, and excluding window handles and all GUI-related features.
    • This includes threading and protected memory, if those were not already reached in an earlier milestone.
    • Console only! No GUI yet.
    • Add registry, users, user profiles.
    • Add a basic COM registration subsystem.
    • Add NTFS support, including ACLs.
    • Add Windows Services support.
    • Complete the IPv4 network stack, make it solid.
    • This implements a second-generation command-line operating system that will execute multi-threaded Win32 console applications.
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!! 
    • Making this milestone flawless will result in an RTM-ready operating system that competes with Windows 2000 Server.
  4. Basic Workstation OS. Focus on Win32 GUI API and Windows XP-compatible video hardware driver support. Prep the HAL and Win32 APIs for future DirectX compatibility. Add a very lightweight GUI shell (one that does NOT try to look like Windows Explorer but that provides mouse-driven functionality to accomplish tasks).
    • This implements a lightweight compatibility layer for some Windows apps. This is a key milestone because it brings the mouse and the GUI into the context, and allows the GUI-driven public to begin testing and developing for the system.
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!!
    • This milestone brings the operating system to its current state (at 0.32), except that, by having followed a stricter milestone and release schedule, the operating system would now be extremely stable.
  5. CLR support, Level 1
    • Execute MSIL (.NET 2.0 compatible).
    • Write new Windows services, such as a Web server, using the CLR.
    • This is huge. It adds .NET support and gives Microsoft .NET and the Mono project a run for their money. And the CLR alone explains why Microsoft was originally tempted to call Windows Server 2003 "Windows .NET Server".
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!!
    • This introduces another level of cross-platform compatibility, and makes the operating system an alternative to Windows Server 2003 with .NET.
  6. CLR support, Level 2, and full DirectX 9 compatibility
    • Execute MSIL (.NET 3.0 compatible), including and primarily
      • "Avalon" / Windows Presentation Foundation (XAML-to-Direct3D, multimedia, etc)
      • "Indigo" / Windows Communication Foundation
      • WF (Workflow)
      • .. do we care about InfoCard?

These are some really difficult, if not impossible, hurdles to jump; there isn't even any real application functionality in that list, except for APIs, a Web server, and a couple of lightweight shells (console and GUI). But that's the whole point. From there, you can pretty much allow the OS to take on a life of its own (the Linux symptom). The end result, though, is a very clean and stable operating system core that has an RTM version in some form from the get-go, rather than a walking, bloated monster with a zillion features that are half-implemented or unimplemented and that suffers a Blue Screen of Death at every turn.

PowerBlog proved to be an interesting learning opportunity for me in this regard. There are a number of things I did very wrong in that project, and a number of things I did quite right. Unfortunately, the things I did wrong outweighed the things I did right, most notably:

  • Starting out with too many features I wanted to implement all at once
  • Starting implementation with a GUI rather than with the engine
  • Having no knowledge of TDD (test-driven development)
  • Implementing too many COM and Internet Explorer integration points (PowerBlog doesn't even compile on Windows Vista or on .NET v2.0)
  • Not paying close enough attention to server integration, server features (like comments and trackbacks), and Atom
  • Not getting around to implementing automatic photo insertion / upload support, even though I basically understood how it needed to be done on the blog article editor side (it was nearly implemented on the MetaWeblog API side)
  • Not enough client/server tests, nor mastery of C# to implement them quickly, nor knowledge of how to go about them in a virtualized manner
  • Too many moving parts for a single person to keep track of, including a script and template editor with code highlighting

What I did right:

  • Built the GUI with a specific vision of functionality and user experience in mind, with moderate success
  • Built the engine in a separate, pluggable library, with an extensibility model
  • The product did work very well, on a specific machine, under specific requirements (my personal blog), and to that end, for my purposes, it was a real killer app, right on par with the current version of Windows Live Writer and FAR more functional (if a bit uglier)
  • A fully functional proof of concept, and an opportunity to experience the full life cycle of product development, including requirements, design, implementation, marketing, and sales (to some extent, but not so much customer support), and to gain experience with full-blown VB6, COM/ActiveX, C#, XML, SOAP, and XML-RPC

I suppose that the more I think about the intricacies of PowerBlog and how I went about them, the more pride I have in what I accomplished, even though no one but me will ever appreciate it. In fact, I started out writing this blog post as "I am SO ashamed of PowerBlog," but I deleted all that, because when I think about all the really cool features I successfully implemented, from a geeky perspective, wow, PowerBlog was really a lot of fun.

That said, it did fail, and it failed because I did not focus on a few basic, simple objectives and test them from the bottom up.

I can't stop thinking about how globally applicable these principles are at every level, both specifically in software and broadly in my career and the choices I make in my everyday life. It's no different from a flask of water: our time and energy can be spilled out all over the table, or carefully poured toward specific objectives. The latter is what separates a world of refined gold from a world full of mediocrity and waste.

There is one good thing to say about spreading yourself out in the first place, though. Spreading out and doing mediocre things solely for the sake of learning the idiosyncrasies of everything you touch is key to knowing how best to develop the things you eventually focus on. One can spend months testing out different patterns for a simple piece of functionality, but experiencing real-world failures and wasted effort on irrelevant but similar things makes a person more effective and productive at choosing a pattern for the specific case at hand and making it work. In other words, as the saying goes, a person who never fails will never succeed, because he never tried. That applies to the broad-and-unfocused versus specific-and-focused philosophy in this sense: pondering the bigger scope of things, testing them out, seeing that they didn't work, and realizing that the biggest problem was one of philosophy (I didn't FOCUS!) not only makes focusing more purposeful, but the experience gained from "going broad" also helps during the implementation of the focused output, if only in understanding the variations of the scenarios each implementation must handle.

Tags:

Open Source | Computers and Internet | Operating Systems | Software Development | Career | Microsoft Windows


 
