Welcome to BlogEngine.NET with MSSQL provider

by Admin 30. September 2007 00:00

If you see this post it means that BlogEngine.NET is running and the SQL Server provider is configured correctly.


If you are using the ASP.NET Membership provider, you are set to use existing users. If you are using the default BlogEngine.NET XML provider, find and open the users.xml file which is located in the App_Data folder. Edit the default user and provide your own name as the username and a password of your choice. Save the users.xml file with the new username and password and you are now able to log in and start writing posts.

Write permissions

Since you are using SQL Server to store your posts, most information is stored there. However, if you want to store attachments or images in the blog, you will want write permissions set up on the App_Data folder.

On the web

You can find BlogEngine.NET on the official website. Here you will find tutorials, documentation, tips and tricks and much more. The ongoing development of BlogEngine.NET can be followed at CodePlex where the daily builds will be published for anyone to download.

Good luck and happy writing.

The BlogEngine.NET team

Currently rated 1.1 by 17 people



Extending XHTML Without A DTD

by Jon Davis 23. September 2007 23:33

Until Sprinkle, I never did much with extending the HTML DOM with my own tags or attributes. When XML was introduced several years ago, people tried to "explain" it by just throwing custom tags into their HTML and saying, "This is how the new semantic web is gonna look, see?"

<books><ol><book><li>My Book</li></book><book><li>My Other Book</li></book></ol></books>

Of course, that's not the greatest example, but at any rate, from this came XHTML, which basically told everyone to formalize this whole XML-ization of HTML markup so that custom tags could be declared using a strict DTD extension methodology. Great idea; only, instead of picking up the ball and running with it for the sake of extensibility, people ran the other way and enforced strictness alone. So XHTML turned out to be a strictness protocol rather than an extensibility format.

Literally, even the latest, shiniest new web browsers, except for Opera (congratulations, Opera), have trouble dealing with inline XHTML extensions. At sprinklejs.com, the following at the top of the document causes a problem:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" [
  <!ATTLIST div src CDATA #IMPLIED >
  <!ATTLIST div anticache CDATA #IMPLIED >
  <!ATTLIST div wraptag CDATA #IMPLIED >
  <!ATTLIST div apply CDATA #IMPLIED >
  <!ATTLIST input anticache CDATA #IMPLIED >
  <!ATTLIST input apply CDATA #IMPLIED >
]>

The problem? Just go try to run that and you'll see what the problem is. The stupid web browser doesn't even speak XHTML. It sees those ATTLIST declarations and thinks, aww heck, this must be malformed HTML 4.01 markup, so it tries to "clean" it up in memory by closing out the DOCTYPE before it reaches the "]>". So, when it does reach the "]>", it thinks, "Huh. Odd. What's that doing here? I haven't reached a <body> tag yet. That must be a markup error. I'll just go and 'clean' that up by moving it to the top of the body." So it gets rendered as text.

If you do a Javascript alert(document.body.innerHTML); you'll see that it became content rather than being treated as an XHTML pre-parser definition. The W3C validator thinks it's just hunky-dory, but IE7 / FF2 / Safari 3 simply don't have a clue. (Morons.)

But heck. These browsers handle the custom tags just fine without the declaration; they don't balk at the Sprinkle script when the XHTML extensions aren't declared. And the breaking point is just extra content, right?

So I "fixed" this by simply clearing that ugly bit out. Here we go:

function dtdExtensionsCleanup() {
    // tested on MSIE 6 & 7, Safari 3, Firefox 2
    if ((document.body.innerHTML.replace(/ /g, '').replace(/\n/g, "").substr(0, 5) == "]&gt;") ||
        (document.body.innerHTML.substr(0, 11) == "<!--ATTLIST" ||
         document.body.innerHTML.substr(0, 11) == "<!--ELEMENT")) {
        var subStrStartIndex = document.body.innerHTML.indexOf("&gt;",
            document.body.innerHTML.indexOf("]"));
        var subStrHtml = document.body.innerHTML.substring(subStrStartIndex + 4);
        document.body.innerHTML = subStrHtml;
    } else {
        // Opera 9.23 "just works"
    }
}
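The same cleanup logic can also be expressed as a pure function over the innerHTML string, which makes the behavior easier to see and to test outside a browser. This is a sketch of the same idea, not the code that actually shipped:

```javascript
// Sketch: given the body's innerHTML, strip a leading DTD-residue fragment
// such as "]&gt;" or an escaped "<!--ATTLIST ...>" block, returning the rest.
function stripDtdResidue(innerHTML) {
  var compact = innerHTML.replace(/ /g, "").replace(/\n/g, "");
  var looksLikeResidue =
    compact.substr(0, 5) === "]&gt;" ||
    innerHTML.substr(0, 11) === "<!--ATTLIST" ||
    innerHTML.substr(0, 11) === "<!--ELEMENT";
  if (!looksLikeResidue) return innerHTML; // e.g. Opera, which parses it fine

  // Find the "&gt;" that closes the residue (the first one after the "]")
  // and drop everything up to and including it.
  var end = innerHTML.indexOf("&gt;", innerHTML.indexOf("]"));
  return innerHTML.substring(end + 4);
}
```

A browser version would then just assign document.body.innerHTML = stripDtdResidue(document.body.innerHTML).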


Currently rated 5.0 by 2 people



Web Development

Crash 'N Burn (moved to XNA Creators Club)

by Jon Davis 23. September 2007 12:43

I moved my blog post regarding my embarrassing lack of preparations for my Intro to Microsoft XNA session over yonder .. http://forums.xna.com/forums/p/4913/25687.aspx#25687 

Currently rated 5.0 by 1 person



Software Development | Xbox Gaming

A Perusal of Game Engines and APIs

by Jon Davis 15. September 2007 15:58

Today at Desert Code Camp I presented a session called A Perusal of Game Engines and APIs.  This was totally just for fun (I'm not a game developer! .. but I tinker ..)

Fun as it was supposed to be, I took some vacation time to make it happen, and I got through it (and did a moderately decent job I suppose, for getting only two hours of sleep due to cramming), but not without some hair-pulling and the near-shedding of tears.

I covered mostly open-source stuff but also the obvious stuff (including commercial bits) one can find right off the Internet. Here is my PowerPoint 2007 presentation:


One minor error I made is that I forgot about XInput (which, in XNA, deprecates DirectInput). I also wasn't certain whether XACT complements DirectSound or obsoletes it outright, but I did give a nod to XACT's asset management tool.

Now if you'll all excuse me, I need to get some sleep and prepare for tomorrow's session, An Introduction to Microsoft XNA. (More cramming...  *sigh* )

Currently rated 4.8 by 4 people



Pet Projects | Open Source | Software Development | Xbox Gaming | Microsoft Windows

Sprinkle Javascript library

by Jon Davis 13. September 2007 16:06
<script src="sprinkle.js"></script>
<div src="info.html"></div>

CSI (Client-Side Includes), for when SSI (Server-Side Includes) is not available. You can also call it "sprinkle", as that's the name I gave the Javascript library.
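The core idea can be sketched in a few lines: scan the document for div tags carrying a src attribute and inject the fetched content, the way SSI would server-side. This is only an illustrative sketch of the concept, not the actual sprinkle.js source (the fetching is abstracted behind a callback so the DOM-walking logic stands alone):

```javascript
// Sketch of the client-side-include idea. fetchText is assumed to be a
// synchronous loader (e.g. wrapping XMLHttpRequest) that returns HTML text.
function applyClientSideIncludes(doc, fetchText) {
  var divs = doc.getElementsByTagName("div");
  for (var i = 0; i < divs.length; i++) {
    var src = divs[i].getAttribute("src");
    if (src) {
      // Pull the referenced fragment and inject it into the placeholder div.
      divs[i].innerHTML = fetchText(src);
    }
  }
}
```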



Currently rated 5.0 by 2 people



Computers and Internet | Software Development | Web Development

ROFL .. Little Girl Got Some Nerve!

by Jon Davis 3. September 2007 15:53



Design Top-Down, Implement Bottom-Up

by Jon Davis 2. September 2007 23:26

When writing software, you should always begin with a top-level, outer shell design, and then dig deeper into "designing" the implementation until you've covered all the layers of abstraction in order to meet the functional requirements. Then to implement, start at the very bottom, the back-end, the nuts and bolts, and work your way up.

This sounds ridiculously obvious, and annoyingly insulting as a result. But it's a practice that is often overlooked, and software success suffers for it.

I'm guilty. I've written a few GUI applications where the first thing I did was take a Form, litter it with menus and controls, attach event handlers, and make the event handlers do something. That is the perfect opposite of what I should have been doing. For this I completely blame the precedent of Visual Basic and all the IDEs that followed. In classic Visual Basic, a "program" started out as a form in a form designer. You added code to the form. The GUI *is* the program. You don't get into the functional nitty-gritty until the form has been designed AND implemented. Bad idea.

Quality Mandates QA'ing As Part Of The Development Cycle, and QA'ing Skill

The reason why I say all this isn't because the GUI isn't important. In fact, I've found that excellent GUI programming takes as much engineering genius and mastery of the tools as the back-end engine. The problem is that when you're focused on testing GUI bugs, you shouldn't be harassed with engine bugs. The same is true vice versa, of course; testing engine bugs by clicking on GUI buttons is a horrible approach to basic software QA.

In fact, for back-end testing, a software QA tester's role should be one of programming, not of note sheets and administration. It really irks me that in every team I've been on (including one where I *was* the QA engineer), the testing role was to push the provided buttons and document reported exceptions. That's not QA'ing. That's .. what is that? That's .. the company being a cheapskate for not hiring enough developers to test their crappy code?

Actually, there is a fine line between a QA tester and a QA developer. The latter writes low-level API tests in code, comes up with new tests, changes out the scenarios, all in code. These people seem to have been forgotten in the software industry today; I've only heard of Microsoft employing them. Yet they are extremely important, and they should be as knowledgeable of software development as the guy who wrote the software!! (I'm so sick of QA staff being CLUELESS!!) The other type might as well be a temp employee getting paid minimum wage. They read the manual as provided, they push the buttons as provided, and they watch for inconsistencies. Big whoop. That's important, but, really, big whoop. If they find something wrong, then either something went wrong at product design, or the QA developer didn't do a very good job of seeking out new tests.
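To make the distinction concrete, here is a hedged sketch of what a QA developer's work product looks like: scenarios written against the API in code, not buttons pushed on a form. The function under test is a made-up stand-in:

```javascript
// A made-up API function standing in for "the engine" being tested.
function addLineItem(order, price, quantity) {
  if (quantity <= 0) throw new Error("quantity must be positive");
  order.total += price * quantity;
  return order;
}

// A QA developer writes scenarios like these directly against the API,
// and keeps inventing new ones -- no GUI involved at all.
function runApiTests() {
  var failures = [];

  var order = { total: 0 };
  addLineItem(order, 9.99, 2);
  if (Math.abs(order.total - 19.98) > 1e-9) failures.push("total is wrong");

  try {
    addLineItem(order, 5, 0);
    failures.push("zero quantity should have thrown");
  } catch (e) {
    // expected: invalid input rejected at the API level
  }

  return failures;
}
```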

That's why designing top-down, and making said design in a simple, terse, but thoughtful manner, is a must as the key second phase of the project (the first phase being requirements gathering). The first thing that should be done is to be sure that the design requirements are correct, valid, and make sense. If there's a bug in the end QA process that isn't due to faulty code, it's due to a failure on this first step. But if one does implement bottom-up, that in turn mandates that the QA process begin with QA developers who are very proficient software developers, so that they can test and know for certain that the software is working as it should at its API levels. For that reason, QA developers should be paid as much as, or more than, the developers who implement core code. QA development can be a painful, boring task, and an excellent QA developer really does require as much smarts as the originating engineer.

Oh, Yeah.. Triads..

The secondary objective of designing top-down and implementing bottom-up (the primary being quality assurance at the low-level functions) is to retain the separation of concerns. This allows each area of functionality--the GUI, the data objects, the computing tasks--to be independently fine-tuned with total focus on refining the quality of the component(s) individually. This in turn gives the GUI, along with the other parts of the app, the opportunity to be given complete and total attention when its time comes.

The new trend for enforcing separation of concerns is invoking the MVC / MVP triad. This is a technical solution to a workflow problem, so I'm just a bit befuddled, but it is nice that the technical solution does in fact impose this workflow.

Atomic Object's Presenter First pattern seems to have a successful history. Presenter First makes excellent sense because a GUI on its own is worthless without underlying functionality, and the objects being managed (the blog articles, the accounting records, the game monsters, etc.) are likewise worthless without the controlling functionality. This pattern works great for standalone software, but not so great for ASP.NET web applications, so I would suggest ignoring this whole pattern for web apps.

Get rid of the monolithic approach to software development where the app starts with the Windows Form.

Start with a class library project. This class library defines your functional requirements. This is your presenter or controller library.

Add a new class library project, one that the first class library will reference. This class library defines your objects, your cars, your employees, your animals, whatever objects you're trying to build your program around.  This is your model object library. But for now, it will only consist of "dummy" classes, or mock model objects. Later, you will code up your model objects as real objects, after your presenter library has been prototyped and tested.

Add a new class library project, another one that the first class library will reference. This class library defines your user interface--a console app, a Windows app, whatever. This is your view library. But for now, it will only consist of "dummy" classes, or mock objects. Later, you will code up your view objects and test them thoroughly, as the last step.

Finally, add an executable program project (console app or Windows app) to your solution. This will reference all of these libraries and fire up (instantiate and execute) the presenter.

Each view object raises events to its presenter, and each model object raises events to its presenter. Whether there is exactly one presenter for the app, managing several view objects and several model objects, or several presenters joined to a "master presenter", I'm not sure; I haven't explored the latter. I have, however, come across views that were actually presenters with an associated model and view, and likewise I've come across models that were actually presenters with an associated supporting model and view.
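A minimal sketch of that event wiring, with mock view and model objects as described above (all names here are illustrative assumptions, not from any particular MVP framework):

```javascript
// Tiny event source: objects raise named events; subscribers register handlers.
function EventSource() { this.handlers = {}; }
EventSource.prototype.on = function (name, fn) {
  (this.handlers[name] = this.handlers[name] || []).push(fn);
};
EventSource.prototype.raise = function (name, data) {
  var list = this.handlers[name] || [];
  for (var i = 0; i < list.length; i++) list[i](data);
};

// Mock view: raises "submit" events, displays whatever it is told to show.
function MockView() { EventSource.call(this); this.shown = null; }
MockView.prototype = new EventSource();
MockView.prototype.show = function (text) { this.shown = text; };

// Mock model: raises "saved" events when an item is persisted.
function MockModel() { EventSource.call(this); }
MockModel.prototype = new EventSource();
MockModel.prototype.save = function (item) { this.raise("saved", item); };

// The presenter owns the use case; view and model stay ignorant of each other.
function Presenter(view, model) {
  view.on("submit", function (item) { model.save(item); });
  model.on("saved", function (item) { view.show("Saved: " + item); });
}
```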

Get Real. There Is No Silver Bullet

The obvious observation I've made is that MVP/MVC allows for the appropriate separation of concerns and makes test-driven development and quality assurance far more feasible.

These kinds of patterns are commonly being swallowed as "the way things are done", but I find them hard to swallow universally, kind of like COM. COM met a lot of needs, but it wasn't a silver bullet, and frankly it was a mess. MVP is MUCH less of a mess, but it does introduce the requirement for much more disciplined coding techniques. I don't think MVP is the universal silver bullet any more than COM was. I certainly don't think it is at all appropriate for ASP.NET, as I mentioned, and it irritates me greatly when people keep trying to apply the MVP pattern to ASP.NET pages. I think the basic principles of MVP can be had and enjoyed in ASP.NET by applying simple abstraction techniques, but to call it MVP and insist on the full implementation of MVP on a web page or web site is just ridiculous, in my opinion.

I'm still waiting for a decent ASP.NET pattern to come around. Castle Windsor, MonoRail, Spring.NET--these things haven't gotten me inspired yet, particularly where they break the preexisting ASP.NET coding guidelines by producing ASP Classic-style code. There's also the issue of web servers being stateless and of navigating through a web app by way of URLs. These things make the application of patterns like MVP useful only on the "snapshot" basis of a particular page, or a particular control on a page, and not very useful across a site, except for headers and shopping carts where some level of session state is retained.

Patterns aside, though, there is one pattern that is universal, and that is to design top-down and implement bottom-up.

Why? Well, If You Touch The Hot Stove, It Will Burn You

What happens if you design top-down and implement top-down?

This is like trying to teach graphic designers to write software and then utilizing their first output. Do I even need to explain why this is bad? If you're building your GUI first, you're going to be building your programming logic on the GUI implementation. That's ridiculously foolish. (It's also part of the reason why applying MVP to ASP.NET web apps won't work: the view is pre-built for you in the form of web pages and HTTP calls.) The GUI is only an interface to some computing task, and the data objects are only data containers. The computing task is the core of the functionality--not the data objects and not the GUI. You should be focusing on the computing tasks first. In an operating system, that starts with the kernel, hardware drivers, and file systems, NOT the GUI shell or screensaver.

If you design from the bottom up and implement from the bottom up (*cough* LINUX!!), which is essentially the same as letting the developers design the application, you're going to get a mess of useless features and solutions, even if well-tested, that integrate poorly and do not have appealing public interfaces.

There is actually a rather fine line between designing top-down and architecting top-down. The design includes the architecting, to the extent of public and inheritable interfaces. It includes laying out some basic software premises, even defining object member stubs. For that matter, UML could be used to design top-down. I personally like to just use Notepad.

My Case Study

I recently produced a successful mini-project at work: an object-oriented, Lucene-based search engine, built using design top-down, implement bottom-up. There had been some internal debate about whether a Lucene.NET-based search engine should support the abstraction of indexed objects. In my view, any of our strongly typed objects was just a data structure to Lucene, and should be mapped and retrieved as a data structure. Here's what my top-down design looked like. I did it at home in my living room on my laptop, and it became the bottom-up outline for my implementation efforts:

Searchable Object Data Architecture Design (Proposed)

Listing search web page -> instantiates, populates, and invokes -> Typed listing query
Typed listing query -> exposes -> implementation-specific (Horse, Boat) fields for
                                     - value matching
                                     - range matching
                               -> other implementation-specific query rules
                     -> returns -> Listing implementation objects (Horse, Boat) 
                                   or ListingData objects (from base)
                     -> is a -> Listing query

Listing query -> exposes -> common Listing fields for
                              - value matching
                              - range matching
                         -> other common Listing query rules
              -> obtains -> resulting ListingIds (using base)
              -> invokes -> SQL server to populate Listing objects
              -> returns -> fully-populated ListingData objects
                            with fields from joined table(s)
              -> is a -> General query

Story search Web Page -> instantiates and populates -> Story query
Story query -> obtains -> all stored fields from index
            -> returns -> story objects
            -> is a -> General query

General query -> retains -> query schema definition 
                            (field-level type / storage / index / rangeable query metadata)
                         -> result data structure definition
              -> returns -> list of either: IDs (ListingId, story URL, etc), or data structure
              -> instantiates, populates, and invokes -> Lucene query
              -> instantiates, populates, and invokes -> SQL query

Lucene query -> retains -> schema definition 
                          (field-level type / storage / index / rangeable metadata)
             -> consists of -> query fields
                            -> field ranges
                            -> pagination ranges 
                            -> return data structure (named fields per result hit)
             -> invokes -> Lucene search engine
                             - by contained index (intra-process)
                             - by IPC (inter-process)
                             - by TCP (wire)
             -> sends on invoke -> itself
             -> obtains -> list of either 
                           a) specified key field (IDs), or 
                           b) set of all stored fields for document

Lucene search engine -> has a -> CLR assembly (DLL) stub (for client invocation)
                     -> has a -> Windows process stub (for server debugging)
                     -> has a -> Windows service stub (for server deployment)
                     -> retains -> configurations from web.config (client invocation) 
                                   and app.config (server invocation)
                                -> Lucene indices
                     -> accepts -> Lucene query
                     -> returns -> list of data structures 
                                   (named schema == stored fields 
                                   in each matching document in index)

I literally wrote this top-down, block by block, each block describing how to accomplish the one before it. After chewing on it for a few days, I did some refinements, dropped a few features, changed out a few relationships, and wrote up a task prioritization list--first by reversing the order of the blocks above, and then writing (as in, with a pen) the sequential order in which I would need to implement things to accomplish the end result. I added inline tests to my code (I had not yet learned TDD, so I didn't add NUnit tests or any [Test] attributes or get anything formally built for complete testing) and did not proceed to the next step up the chain of functionality until each piece worked flawlessly. And I frequently returned to the above design "spec" to keep myself on track and to avoid deviating from the ultimate objectives.

I deviated a bit from the design, but only where the design was flawed. For example, instead of having a "Lucene query" send itself to the search engine, I created an abstract "QueryHandler" class, and from that I created a LuceneQueryHandler. I did the whole LuceneQueryHandler --> LuceneService connection bit using WCF, which met most of the connection variation requirement with one move. (Where it didn't meet the requirements, the requirements were dropped.)
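As a rough illustration of that refactoring (the real code was C#/WCF; this sketch uses invented names and the Javascript idiom of the rest of this blog), the abstract handler owns the "how do I reach the engine" question, so the query object no longer sends itself:

```javascript
// Abstract query handler: subclasses decide how a query reaches an engine
// (in-process, IPC, or over the wire).
function QueryHandler() {}
QueryHandler.prototype.execute = function (query) {
  throw new Error("execute() must be implemented by a subclass");
};

// Concrete handler for a Lucene-backed service. The service URL and the
// shape of the result object are assumptions for illustration only.
function LuceneQueryHandler(serviceUrl) {
  this.serviceUrl = serviceUrl;
}
LuceneQueryHandler.prototype = new QueryHandler();
LuceneQueryHandler.prototype.execute = function (query) {
  // In the real system this call crossed a WCF boundary; here we simply
  // simulate handing the query to an engine and returning a hit list.
  return { query: query, hits: [] };
};
```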

After implementing these things (except for the SQL and Object index portions) I started calling the query objects from a web page. That in itself was a month of effort, but fortunately the effort had very little to do with querying, and everything to do with things like CSS and layout decisions.

In the end, I had produced a moderately well-tested and moderately stable search engine system, where failures in the system were not caught and debugged from the web page code-behind but in their appropriate place, i.e. the search engine server itself. I then went through and documented all of the code by enabling XML documentation warnings and filling in the missing API documentation holes, which took me about four hours, and produced a complete Sandcastle output for all of the assemblies I produced.

You'll notice that there is no conformance to MVC or MVP here. In my opinion, MVP is in some cases completely inappropriate. Software frameworks don't have to rely on such patterns to be well-built.

There are obviously lots of ways I could have approached that project better, not the least of which is TDD. But the way the mini-project was done, both the general clarity of its design from top to bottom, and its focused but complete implementation from bottom up, still has me tickled with pride. See, when I started this project I was extremely intimidated. I didn't have much experience with indexing queryable objects, except simple experience with SQL queries and XML XPath. I didn't know Lucene.NET, and all of this seemed so foreign to me. But after sitting down and thinking it through, top-down, and implementing it, bottom-up, I feel much more confident not only that the software is going to work flawlessly as designed but also that I am far more competent as a software developer than I had originally made myself out to be.

Keep It Simple, Yo!

The KISS principle (Keep It Simple, Stupid!) applies directly to the top-down design. People forgetting to keep the top-down design as simple and as straightforward as possible is the key reason why XP / Agile programming practitioners are likely scared of entering design phases. But bear in mind, simple does not mean vague, so much as it means terse. The design of each part of a solution should match the audience who will implement it. Look at the above design for the Lucene search service. I did that on my own, but on a larger project with more people involved, the following is how I probably would have gone about the top-down design. Someone with the role of Architectural Manager would be responsible for arranging these pieces to be accomplished. These designs should occur in sequential order:

  1. The web API developer would only suggest, see, verify, and sign off on the first requirement ("Listing search web page -> instantiates, populates, and invokes -> Typed listing query"), and will be counting on the business objects developer to provide the callable object(s), method, and return object(s)
    • This is as close to UI design as an API can get. It defines the true public interface. Ideally, the requester of the requirement calls the shots for the public API, but is required to be basic and concise about the requirements so as to not require anything more or less than what he needs. He also gets to request the interfaces exposed by the business objects that provide him with what he needs, although it is ultimately the business objects developer's job to define that interface.
  2. The business objects developer will only see, verify, and sign off on the second level of abstraction (Typed listing query, Listing query, Story query, while understanding the public interface requirement), and will be relying on the query developer to provide the dependency code. The business objects developer gets to request the public interface exposed by the data structure query developer's solution, although it is ultimately the data structure query developer's job to define that interface.
  3. The project's data structure query developer will only see, verify, and sign off on the third level of abstraction (General query, while understanding the calling requirements), and, not actually knowing anything about how Lucene works, he will be relying on the Windows client/service developer to implement the Lucene query handler. The query developer gets to request the public interface exposed by the client/server developer, although it is ultimately the client/server developer's job to define that interface.
  4. The project's client/service developer will only see, verify, and sign off on the fourth level of abstraction (Lucene query handler, and a Windows service with a WCF listener). The client/service developer will request the interface exposed by the implementation of Lucene.Net, although it is ultimately the Lucene.Net implementation developer's job to define that interface.
  5. A Lucene.Net implementation developer will only see, verify, and sign off on the actual Lucene.NET querying that is to be done, while understanding the notion that it is being called from within a Windows service that is being passed a General query.
  6. The client/service developer and the Lucene.Net implementation developer would collaborate and determine a design for Lucene indexing strategy (so that Lucene.Net has data to query against).

The rule of each person only seeing, verifying, and signing off on his own design is NOT hard and enforceable. For example, for ongoing maintenance it can be a real hassle for a software developer in California to have to sit around and wait for a database administrator in New York to get around to adding a couple of tables to a database and web site hosted in Florida, just so he can get some code that calls into those database tables working successfully. The same nightmare could apply to software design. But a general understanding of who the audience is for each part of the design is what makes the design accessible to each person in the implementation process. There is no need to write a design document whose engineering design requirements are readable by management staff but too wordy or plain-English for an engineer to be willing to pick up and read. Keeping it simple can literally mean being free to use the terse verbiage and tools of the engineers' toolset, such as using pseudocode to express a requirement that would take longer to express either in plain English or in real code.

Sequence For Testability

For the implementation of this design, the responsibility would go in reverse:

  1. The Lucene.Net implementation developer would index some sample data. The data would be in the same format as one of the proposed data structures. In a testbed environment, he would test Lucene.Net itself to be sure that basic Lucene.Net queries work correctly.
  2. The Lucene.Net implementation developer and the client/service developer would work together to implement a Lucene service that could load an index.
  3. The General query developer would build a general query class. It would hold query data including condition filters and sorting requirements, and the structure of the data and the parsing of conditions would be tested and accounted for. The query would not execute (that is handled by the query handler).
  4. The Lucene.NET developer would populate instances of the general query class and perform Lucene.Net queries against it. He should debug the querying of Lucene.Net until all tests pass.
  5. The client/service developer would create a query handler that could handle the passing of the general query from the client to the Lucene.Net implementation's interface. He should debug the client/server calls using populated query objects until all tests pass.
  6. The business object developer would build a typed query builder class and a typed query result class. He should test with queries and query results until all tests pass.
  7. The web API developer would build web components that call upon the business objects that facilitate querying and that render the results in HTML. He should test with queries and query results until all tests pass.
  8. The web developer / designer would build the web page, dragging and dropping the web API developer's output to the page, stylizing it with CSS. He should test design and variations on Web API output until all web design tests pass.

This might seem like a no-brainer, but it really can be easy to approach these steps backwards, on either front--but mostly on the implementation side, going top-to-bottom instead of bottom-to-top. It's tempting to just throw together some HTML placeholders, then start writing some code-behind placeholders, then start building out some method stubs, then create a new class that performs a quick-and-dirty Lucene call directly from a file somewhere. That clearly wouldn't perform well, it would likely be buggy and lack many features, and it wouldn't be a reusable solution. Once again, I'm guilty. And in the case of this search engine service, I believe I found a much better approach to pursuing software mini-projects.
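Step 3 in the sequence above, for example, might start out as small as this: a query object that carries condition filters and sorting requirements but deliberately does not execute itself, since execution belongs to a query handler. This is a sketch with assumed names, not production code:

```javascript
// A general query: holds filters, sort order, and the expected result fields.
// It is pure data plus builder methods; a separate handler executes it.
function GeneralQuery() {
  this.filters = [];    // e.g. { field: "price", op: "range", value: [0, 100] }
  this.sortBy = null;
  this.resultFields = [];
}
GeneralQuery.prototype.where = function (field, op, value) {
  this.filters.push({ field: field, op: op, value: value });
  return this; // chainable, so queries read declaratively
};
GeneralQuery.prototype.orderBy = function (field) {
  this.sortBy = field;
  return this;
};
GeneralQuery.prototype.select = function (fields) {
  this.resultFields = fields;
  return this;
};
```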

Currently rated 2.8 by 5 people



Software Development

Don't Spread Yourself Too Thin

by Jon Davis 2. September 2007 20:32

At work--in any workplace, really--the leaders and supporters (including myself, in whatever role I play as a consultant or software engineer or otherwise) are hindered, and the value of their work discounted, in direct proportion to how spread out they are. I tend to look upon myself with shame, for instance, when I see myself as a "jack of all trades, master of none", which is something I really don't want to be. I'd much rather be "knowledgeable of much, master of much", even if "much" does not come anywhere close to "all". I feel like I should be able to afford that, since I am single, have no life except the life in front of my computer(s), and my sole hobby is software. Even so, I struggle to master my skills and to stay on top of current technologies.

When we take our talents and spread them too wide in the workplace, we find ourselves making excuses for not taking the time to produce quality work. Mediocre output starts getting thrown out there, and no one is expected to apologize because everyone was too busy to take the time to pursue excellence in the output.

I've started to notice that the rule of "spreading yourself thin makes you worthless" is a universal truth that doesn't just apply to people but also to software. I've been "nerding out" on alternative operating systems so that's where I'm applying this lately. The Haiku OS project is distinct from some other projects in this way. Tidbits of wisdom like this have been sticking in my head:


While we developers are used to getting into "the zone" and flinging code that (mostly) works the way it should, this is not the kind of focus to which I am referring. When developing an application, the main thing that must not remain as the sole focus is the technology. Technology is the means to the goal. Should we focus on the cool stuff we can build into software, we lose sight of what it is ultimately supposed to do — just like running a marathon while staring at your feet, one tends to not see problems until one quite literally runs into them. What is the goal? The ultimate goal is to write good, fast, stable software which does exactly what it is supposed to do — and nothing else — while making it accessible to the intended type of person, and only one person of that type. 

This goes along with so many different ideologies in software architecture discussions that had me scratching my head because I had a hard time swallowing them, like, "Don't add members to your classes that are not going to meet the needs of the documented requirements, no matter how useful they seem." It took me a few days to accept that rule, because I constantly have ideas popping up in my head about how useful some method might be when I couldn't yet point to a requirement for it. But I later learned why it was an important rule. Once it gets added, it becomes a potential bug. All code is a potential bug. The bugs get QA'd on behalf of the requirements, not on behalf of the presence of the class members. Of course, TDD (test-driven development) and agile / XP practices introduce the notion of "never write code until you write a failing test", so that your new experimental method becomes integrated with the whole test case suite. But now you've just doubled to quadrupled your workload. Is it worth it? Well, it might not matter for a few simple methods, but if that becomes a pattern then eventually you have a measurable percentage of experimental code versus actual, practical requirements implementations.
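The "never write code until you write a failing test" rule mentioned above works roughly like this, sketched minimally with Python's unittest (the post's own context would be NUnit and C#, and `slugify` is a made-up example requirement, not anything from the post):

```python
import unittest

# TDD, in miniature. Step 1: write a test for behavior that does not exist yet,
# run it, and watch it fail. Step 2: write just enough code to make it pass.
# Nothing speculative gets added, because nothing gets written without a
# requirement first expressed as a test.

def slugify(title):
    """Minimal implementation: just enough to satisfy the test below."""
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("  Hello World  "), "hello-world")

# Run the suite programmatically (no unittest.main(), so this stays importable).
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify))
```

The workload concern in the paragraph above is real: each speculative method would need its own failing test first, which is exactly what discourages adding it before a requirement exists.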

There is one thing to keep in mind about the above quote, though: that philosophy applies to applications, not to operating systems. The above-quoted article is about building user interfaces, which is an applications-level subject. The operating system, on the other hand, should at one level have the reverse philosophy: technology over function. But let me qualify that: in an operating system, the technology IS the function. I say that because an operating system IS and MUST BE a software runtime platform. Therefore, the public-facing API (which is the technology) is key. But you're still focused on the requirements of the functions. You still have a pre-determined set of public-facing API interfaces that were determined ahead of time in order to meet the needs of some specific application functionality. Those MUST be specific, and their implementations MUST be strict. There MUST be no unsupported public interfaces. And because the OS is the runtime environment for software, the underlying technology, whether assembly code, the C runtime API, the Common Language Runtime, the Java Virtual Machine, etc., is all quite relevant in an operating system. I'd say that it's only relevant at the surface tier, but it's also relevant at lower tiers, because each technology is an integration point. In the Singularity operating system, for instance, which is 99.9% C# code, it's relevant down to the deepest tier.

The reason why I bring up Haiku OS vs. other open-source initiatives, as if they had contrasting interests, is because they very much do. Haiku does in fact have a very specific and absolute milestone to reach, and that is compatibility with BeOS 5. It is not yet trying to "innovate", and despite any blog posts expressing opinions otherwise, that may in fact be a very good thing indeed. What happened with the Linux community is that all the players sprawled out across the whole universe of software ideas and concepts, and the end result is a huge number of software projects, most of them mediocre and far too many of them buggy and error-prone (like Samba, for instance).

More than that, Haiku, as indicated in the above quote, seems to have a pretty strict philosophy: "focus on good, solid, bug-free output of limited features, rather than throwing in every mediocre thing but the kitchen sink". Indeed, that's what Linux seems to do (throw in every mediocre thing but the kitchen sink). There's not much going on in the Haiku world. But with what little it does have, it sure does make me smile. I suppose some of that .. or maybe a lot of it .. has to do with the aesthetic design talent involved on the project. Linux is *functional*, and aesthetics are slapped on as an afterthought. Haiku appears at first glance as rather clean, down to its code. And clean is beautiful.

Going around adding futuristic stubs to code is something I've been guilty of in the past, but I've found it to be a truly awful, horrible practice, so much so that it makes me moan with disgust and terror. It makes a mess of your public-facing APIs, where you have to keep turning to documentation (and breaking tests) to discover what's implemented and what is not. And it leaves key milestone code in a perpetual state of being unable to be RTM'd (released to manufacturing). The best way to write software is to take the most basic functional requirement, implement it, test it thoroughly until it works in all predictable scenarios, add the next piece of functionality, test that piece thoroughly (while testing the first piece of functionality all over again, in an automated fashion), and so on, until all of the requirements are met. In the case of an operating system, this is the only way to build a solid, stable, RTM-ready system.

Microsoft published some beta Managed DirectX 2.0 code a while back, and I posted on their newsgroups, "Why on earth are you calling this 'beta' when the definition of 'beta', as opposed to 'alpha', is that the functionality is SUPPOSED to be there but is buggy? Only 'alpha' releases should include empty, unimplemented stubs, yet you guys throw this stuff out there calling it 'beta'. How are we supposed to test this stuff if the stubs are there but we don't know if they're implemented until we try them?" Shortly after I posted that rant, they dropped the Managed DirectX 2.0 initiative and announced that the XNA initiative was going to completely replace it.  I obviously don't think that my post was the reason why they dropped the MDX2 initiative, but I do think it started a chain of discussions inside Microsoft that made them start rethinking all of their decision-making processes all the way around (not exclusively, but perhaps including, the beta / alpha issue I raised). Even if just one of their guys saw my rant and thought, "You know, I think this whole MDX2 thing was a mistake anyway, we should be focusing on XNA," I think my little rant triggered some doubts. 

The React OS project also had me scratching my head. Instead of littering the OS with placeholders for wanted Windows XP / Vista UI and API features while the kernel is still painfully unfinished (which is indeed what I have observed), the React OS team should be saying: okay, we're going to set these milestones, and we're not going to add a single stub or a single line of code for the next milestone until the current milestone has been reached. Our milestones are specifically and clearly defined:

  1. Get a primitive kernel booting off a hard drive. Execute some code at startup. Master introspection of the basic hardware (BIOS, hard drives, memory, CPU, keyboard, display, PCI, PCIE, USB controllers).
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!!
  2. Basic Execution OS. Implement a working but basic Windows NT-like kernel, HAL, FAT16/FAT32 filesystem, a basic user-mode runtime, a basic Ethernet + IPv4 network stack, and a DOS-style command line system for controlling and testing user-mode programs.
    • This implements a basic operating system that will execute C/C++ code and allows for future development of Win32 code and applications.
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!! 
    • Making this milestone flawless will result in an RTM-ready operating system that can compete with classic, old-school UNIX and MS-DOS.
  3. Basic Server OS. Implement Windows 98-level featureset of Win32 API functionality (except those that were deprecated) using Windows XP as the Win32 API design and stability standard, excluding Window handles and all GUI-related features.
    • This includes threading and protected memory if those were not already reached in Milestone 1.
    • Console only! No GUI yet.
    • Add registry, users, user profiles.
    • Add a basic COM registration subsystem.
    • Add NTFS support, including ACLs.
    • Add Windows Services support.
    • Complete the IPv4 network stack, make it solid.
    • This implements a second-generation command-line operating system that will execute multi-threaded Win32 console applications.
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!! 
    • Making this milestone flawless will result in an RTM-ready, competing operating system to Windows Server 2000.
  4. Basic Workstation OS. Focus on Win32 GUI API and Windows XP-compatible video hardware driver support. Prep the HAL and Win32 APIs for future DirectX compatibility. Add a very lightweight GUI shell (one that does NOT try to look like Windows Explorer but that provides mouse-driven functionality to accomplish tasks).
    • This implements a lightweight compatibility layer for some Windows apps. This is a key milestone because it brings the mouse and the GUI into the context, and allows the GUI-driven public to begin testing and developing for the system.
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!!
    • This milestone brings the operating system to its current state (at 0.32), except that by having a stricter milestone and release schedule the operating system is now extremely stable.
  5. CLR support, Level 1
    • Execute MSIL (.NET 2.0 compatible).
    • Write new Windows services, such as a Web server, using the CLR.
    • This is huge. It adds .NET support and gives Microsoft .NET and the Mono project a run for their money. And the CLR alone gives understanding to the question of why Microsoft was originally tempted to call Windows Server 2003 "Windows Server .NET".
    • Test, test, TEST!! Stop here!! Do not pass Go! You may not proceed until all tests PASS!!
    • This introduces another level of cross-platform compatibility, and makes the operating system an alternative to Windows Server 2003 with .NET.
  6. CLR support, Level 2, and full DirectX 9 compatibility
    • Execute MSIL (.NET 3.0 compatible), including and primarily
      • "Avalon" / Windows Presentation Foundation (XAML-to-Direct3D, multimedia, etc)
      • "Indigo" / Windows Communication Foundation
      • WF (Workflow)
      • .. do we care about InfoCard?

These are some really difficult, if not impossible, hurdles to jump; there isn't even any real application functionality in that list, except for APIs, a Web server, and a couple of lightweight shells (console and GUI). But that's the whole point. From there, you can pretty much allow the OS to take on a life of its own (the Linux symptom). The end result, though, is a very clean and stable operating system core that has an RTM version from the get-go in some form, rather than a walking, bloated monster of a zillion features that are half- or non-implemented and that suffers a Blue Screen of Death at every turn.

PowerBlog proved to be an interesting learning opportunity for me in this regard. There are a number of things I did very wrong in that project, and a number of things I did quite right. Unfortunately, the things I did wrong outweighed the things I did right, most notably:

  • Starting out with too many features I wanted to implement all at once
  • Starting implementation with a GUI rather than with the engine
  • Having no knowledge of TDD (test-driven development)
  • Implementing too many COM and Internet Explorer integration points (PowerBlog doesn't even compile on Windows Vista or on .NET v2.0)
  • Not paying close enough attention to server integration, server features (like comments and trackbacks), and Atom
  • Not getting around to implementing automatic photo insertion / upload support even though I basically understood how it needed to be done in the blog article editor side (it was nearly-implemented on the MetaWeblog API side)
  • Not enough client/server tests, nor mastery of C# to implement them quickly, nor knowledge of how to go about them in a virtualized manner
  • Too many moving parts for a single person to keep track of, including a script and template editor with code highlighting

What I did right:

  • Built the GUI with a specific vision of functionality and user experience in mind, with moderate success
  • Built the engine in a separate, pluggable library, with an extensibility model
  • The product did work very well, on a specific machine, under specific requirements (my personal blog), and to that end for my purposes it was a really killer app that was right on par with the current version of Windows Live Writer and FAR more functional (if a bit uglier)
  • Fully functional proof of concept and opportunity to discover the full life-cycle of product development including requirements, design, implementation, marketing, and sales (to some extent, but not so much customer support), and gain experience with full-blown VB6, COM/ActiveX, C#, XML, SOAP, XML-RPC

The more I think about the intricacies of PowerBlog and how I went about them, the more pride I have in what I accomplished, even though no one will ever appreciate it but me. In fact, I started out writing this blog post as, "I am SO ashamed of PowerBlog," but I deleted all that, because when I think about all the really cool features I successfully implemented, from a geeky perspective, wow, PowerBlog was really a lot of fun.

That said, it did fail, and it failed because I did not focus on a few, basic, simple objectives and test them from the bottom up.

I can't stop thinking about how globally applicable these principles are at every level, both specifically in software and broadly in my career and the choices I make in everyday life. It's no different than a flask of water: our time and energy can be spilled out all over the table, or carefully poured toward specific objectives. The difference separates a world of refined gold from a world of mediocrity and waste.

There is one good thing to say about spreading yourself out in the first place, though. Spreading out and doing mediocre things solely to learn the idiosyncrasies of everything that gets touched is key to knowing how best to develop the things that are being focused on. One can spend months testing out different patterns for a simple piece of functionality, but experiencing real-world failures and wasted effort on irrelevant but similar things makes a person more effective and productive at choosing a pattern for the specific instance and making it work. In other words, as the saying goes, a person who never fails will never succeed, because he never tried. That applies to the broad-and-unfocused versus specific-and-focused philosophy in this sense: pondering the bigger scope of things, testing them out, seeing that they didn't work, and realizing that the biggest problem was one of philosophy (I didn't FOCUS!) not only makes focusing more driven; the experience gained from "going broad" also helps during the implementation of the focused output, if only in understanding the variations of scenarios of each implementation.



Open Source | Computers and Internet | Operating Systems | Software Development | Career | Microsoft Windows

Is anyone QA'ing Samba ??

by Jon Davis 2. September 2007 19:23

We had an old ("old" meaning installed a year ago) installation of Fedora v3 running a CMS that published data to Windows Server 2003 over Samba. Occasionally, at the same time in the middle of the night that the publishing occurred, Windows Automatic Updates would download some patch for Windows and reboot the server. Obviously it was a mistake on our part to let those two actions coincide. But the bigger problem was that the Samba link didn't just drop when Windows rebooted. Instead, it locked up. So by the afternoon the next day, people were pulling their hair out trying to figure out what the @#% was wrong with this stupid CMS server, and why it would just start working again when we rebooted Linux. We finally narrowed it down to a Windows server reboot, and a Samba failure to drop the link.


What should happen is this: a timeout occurs, an error is raised to the calling application (the CMS service), the service halts, and then, when the Windows server comes back up and the CMS service on Linux reattempts to access the path, Samba reattempts to build the link and either succeeds or fails.
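For illustration, the difference between the behavior Samba exhibited (hang forever) and the behavior I'm describing (fail fast and surface the error) comes down to putting a deadline on the blocking call. A rough sketch in Python, not Samba code; the host and port are placeholders:

```python
import socket

def fetch_with_timeout(host, port, timeout_seconds=10.0):
    """Attempt a connection, but raise instead of hanging if the peer is down.

    This is the behavior the Samba link should have had: surface an error to
    the calling application so it can halt and retry later."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout_seconds)  # without this, a dead peer can block indefinitely
    try:
        sock.connect((host, port))
        return sock  # caller is responsible for closing the socket
    except OSError as err:  # socket.timeout is an OSError subclass
        sock.close()
        raise ConnectionError(f"peer {host}:{port} unreachable: {err}") from err
```

With a deadline in place, the worst case is a bounded stall followed by an explicit error, which the calling service can catch and act on, rather than an indefinite hang that has everyone rebooting Linux the next afternoon.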

That was Fedora v3. Now we're using RHEL 5, and meanwhile I'm using Ubuntu 7.04 at home on my laptop. I'm expecting a much smoother Samba experience now. But unfortunately, we cannot even seem to get our Samba links to work, much less behave correctly (i.e. drop) when the Windows server goes down. Now I'm having all kinds of different issues.

The first issue is on my laptop at home: when I use the Network browser in Nautilus to browse the shared folders on my home machines, everything goes erratic. It sees everything really fast one minute, then it locks up for five minutes the next. I click on a folder, and it becomes a file. I hit refresh, and it can't "see" anything. I go up the tree a couple of branches, and it finally starts seeing things. I go back into the branch I was in, and it displays it pretty quickly. I copy a folder to the clipboard, paste to /home/jon, and nothing happens.

Now it could very well be a Nautilus issue, but then here's the other problem ...

At the office, we now have RHEL 5, and we have been trying to migrate off the old Fedora 3 system onto the new one. And now this happens:


Essentially, once the Linux user writes to the Samba share, the share becomes "owned" by root and the user can't do diddly squat. This essentially breaks our publishing plans, rendering the Samba link useless.
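For what it's worth, this kind of ownership symptom is commonly tackled on the Samba side by forcing ownership and permissions per share in smb.conf. The fragment below is hypothetical, not our actual configuration; the share name, path, user, and group are placeholders, and whether this would have fixed our particular setup is an assumption:

```ini
[publish]
    path = /srv/publish
    writable = yes
    ; Force files created through this share to a known owner and mode,
    ; so the Linux-side user is not locked out after a write.
    force user = cmsuser
    force group = cmsusers
    create mask = 0664
    directory mask = 0775
```

The `force user` / `force group` options make every file operation on the share happen as the named account, and the masks keep the resulting files group-writable.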

Fortunately, Microsoft was kind enough to implement NFS support in Windows Server 2003 R2, and R2 is the build we just erected for the new environment. I'll try that next. But it still makes me wonder, what on earth happened to Samba?? It's only been around for, like, a decade!


Open Source | Computers and Internet | Operating Systems | Linux | Microsoft Windows

Speed Up The Windows Vista Start Menu

by Jon Davis 2. September 2007 01:27

I've been installing a lot of stuff on my new laptop, and I've found that trying to simply reach the shortcut to most of my stuff in the Start menu takes about twenty to thirty seconds--about five seconds per click. This was getting really ridiculous. I did a Google search and found this ..


The solution was to change the properties for the start menu (right click the Start button, choose Properties, and with the Start Menu tab selected click the Customize button) so that "Highlight newly installed programs" is deselected.

I do actually like the highlights, but a) there's no real use for it when 95% of the gajillion things I've just installed ... have just been installed, and b) why on Earth doesn't Microsoft cache and optimize this stuff? We're talking about the Start menu here! The most visible part of Windows for ALL of its users!


Computers and Internet | Microsoft Windows



About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
