Design Top-Down, Implement Bottom-Up

by Jon Davis 2. September 2007 23:26

When writing software, you should always begin with a top-level, outer shell design, and then dig deeper into "designing" the implementation until you've covered all the layers of abstraction in order to meet the functional requirements. Then to implement, start at the very bottom, the back-end, the nuts and bolts, and work your way up.

This sounds ridiculously obvious and annoyingly insulting as a result. But it's a practice that is often overlooked, and software success suffers as a result.

I'm guilty. I've written a few GUI applications where the first thing I did was take a Form, litter it with menus and controls, attach event handlers, and make the event handlers do something. That is the perfect opposite of what I should have been doing. For this I completely blame the precedent of Visual Basic and all the IDEs that followed. In classic Visual Basic, a "program" started out as a form in a form designer. You added code to the form. The GUI *is* the program. You don't get into the functional nitty-gritty until the form has been designed AND implemented. Bad idea.

Quality Mandates QA'ing As Part Of The Development Cycle, and QA'ing Skill

The reason why I say all this isn't because the GUI isn't important. In fact, I've found that excellent GUI programming takes as much engineering genius and mastery of the tools as the back-end engine. The problem is that when you're focused on testing GUI bugs, you shouldn't be harassed with engine bugs. The same is true, of course, vice-versa; testing engine bugs by clicking on GUI buttons is a horrible approach to doing basic software QA'ing.

In fact, for back-end testing, a software QA tester's role should be one of programming, not of note sheets and administration. It really irks me that in every team I've been on (including one where I *was* the QA engineer), the testing role was to push the provided buttons and document reported exceptions. That's not QA'ing. That's... what is that? That's... the company being a cheapskate for not hiring enough developers to test their crappy code?

Actually, there is a fine line between a QA tester and a QA developer. The latter writes low-level API tests in code, comes up with new tests, changes out the scenarios, all in code. These seem to have been forgotten in the software industry today; I've only heard of Microsoft employing them. Yet they are extremely important, and they should be as knowledgeable of software development as the guy who wrote the software!! (I'm so sick of QA staff being CLUELESS!!)

The other type might as well be some temp employee getting paid minimum wage. They read the manual as provided, they push the buttons as provided, and they watch for inconsistencies. Big whoop. That's important, but, really, big whoop. If they find stuff wrong, then something either went wrong at product design, or the QA developer didn't do a very good job at seeking out new tests.

That's why designing top-down, and making said design in a simple, terse, but thoughtful manner as the key second phase of the project (the first phase being requirements gathering), is a must. The first thing that should be done is to be sure that the design requirements are correct, valid, and make sense. If there's a bug in the end QA process that isn't due to faulty code, it's due to a failure on this first step. But if one does implement bottom-up, that in turn mandates that the QA process begins with QA developers who are very proficient software developers so that they can test and know for certain that the software at its API levels is working as it should. That said, QA developers should be paid as much as, or more than, the developers who implement core code. QA development can be a painful, boring task, and an excellent one really does require as much smarts as the originating engineer.
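To make the distinction concrete, here is a minimal sketch of the kind of low-level API test a QA developer writes, as opposed to button-pushing. It's in Java for illustration (the post's world is .NET), and every name in it is hypothetical, not taken from any real project:

```java
// Hypothetical engine API under test -- the names here are illustrative only.
class PriceCalculator {
    double totalWithTax(double subtotal, double taxRate) {
        if (subtotal < 0 || taxRate < 0) {
            throw new IllegalArgumentException("negative input");
        }
        return subtotal * (1 + taxRate);
    }
}

public class PriceCalculatorApiTest {
    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();

        // Happy path: exercise the API directly, with no GUI in the loop.
        assertClose(calc.totalWithTax(100.0, 0.08), 108.0);

        // Edge cases a button-pushing tester would likely never reach.
        assertClose(calc.totalWithTax(0.0, 0.08), 0.0);
        boolean rejected = false;
        try {
            calc.totalWithTax(-1.0, 0.08);
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError("expected rejection of negative subtotal");

        System.out.println("all API tests passed");
    }

    static void assertClose(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }
}
```

The point isn't the tooling (NUnit, JUnit, whatever); it's that the tester is writing and evolving tests against the API itself, in code.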

Oh, Yeah.. Triads..

The secondary objective of designing top-down and implementing bottom-up (the primary being for quality assurance at low-level functions) is to retain the separation of concerns. This allows each area of functionality--the GUI, the data objects, the computing tasks--to be independently fine-tuned with total focus on refining the quality of the component(s) individually. This in turn gives the GUI, along with the other parts of the app, the opportunity to be given complete and total attention when its time comes.

The new trend for enforcing separation of concerns is invoking the MVC / MVP triad. This is a technical solution to a workflow problem, so I'm just a bit befuddled, but it is nice that the technical solution does in fact impose this workflow.

Atomic Object's Presenter First pattern seems to have a successful history. Presenter First makes excellent sense because a GUI interface on its own is worthless without underlying functionality, and the objects being managed (the blog articles, the accounting records, the game monsters, etc.) are worthless without the controlling functionality as well. This pattern works great for software, not so great for ASP.NET web applications, so I would suggest ignoring this whole pattern for web apps.

Get rid of the monolithic approach of software development where the app starts with the Windows Form.

Start with a class library project. This class library defines your functional requirements. This is your presenter or controller library.

Add a new class library project, one that the first class library will reference. This class library defines your objects, your cars, your employees, your animals, whatever objects you're trying to build your program around.  This is your model object library. But for now, it will only consist of "dummy" classes, or mock model objects. Later, you will code up your model objects as real objects, after your presenter library has been prototyped and tested.

Add a new class library project, another one that the first class library will reference. This class library defines your user interface--a console app, a Windows app, whatever. This is your view library. But for now, it will only consist of "dummy" classes, or mock objects. Later, you will code up your view objects and test them thoroughly, as the last step.

Finally, add an executable program project (console app or Windows app) to your solution. This will reference all of these libraries and fire up (instantiate and execute) the presenter.

Each view object raises events to its presenter, and each model object raises events to its presenter. Whether there is exactly one presenter for the app to manage several view objects and several model objects, or whether there are several presenters joined to a "master presenter", I'm not sure; I've never come across the latter. I have, however, come across views that were actually presenters with an associated model and view, and likewise models that were actually presenters with an associated supporting model and view.
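The triad described above can be sketched in just a few types. This is a minimal illustration (in Java rather than .NET, and with entirely hypothetical names) of a presenter wired to a mock view and mock model, with the view raising its event upward:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// The view and model exist only as interfaces the presenter depends on,
// so both can start life as the "dummy" mock classes described above.
interface SearchView {
    void onSearchRequested(Consumer<String> handler); // view raises events upward
    void showResults(List<String> results);
}

interface SearchModel {
    List<String> find(String term);
}

// The presenter wires the two together; it owns the workflow.
class SearchPresenter {
    SearchPresenter(SearchView view, SearchModel model) {
        view.onSearchRequested(term -> view.showResults(model.find(term)));
    }
}

// Mock triad members, standing in until the real view and model are built.
class MockView implements SearchView {
    Consumer<String> handler;
    List<String> shown = new ArrayList<>();
    public void onSearchRequested(Consumer<String> h) { handler = h; }
    public void showResults(List<String> results) { shown = results; }
}

class MockModel implements SearchModel {
    public List<String> find(String term) { return List.of(term + "-result"); }
}

public class TriadDemo {
    public static void main(String[] args) {
        MockView view = new MockView();
        new SearchPresenter(view, new MockModel());
        view.handler.accept("boat");    // simulate the user raising a view event
        System.out.println(view.shown); // prints [boat-result]
    }
}
```

Because the presenter only ever sees the interfaces, the real Windows Form and the real data objects can be swapped in later without touching the presenter's logic.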

Get Real. There Is No Silver Bullet

The obvious observation I've made is that MVP/MVC allows for the appropriate separation of concerns and makes test-driven development and quality assurance far more feasible.

These kinds of patterns are commonly being swallowed as "the way things are done", but I find them hard to swallow universally, kind of like COM. COM met a lot of needs. But it wasn't a silver bullet and frankly it was a mess. MVP is MUCH less of a mess, but it does introduce the requirement for much more disciplined coding techniques. I don't think MVP is the universal silver bullet any more than COM was. I certainly don't think this is at all appropriate for ASP.NET, as I mentioned, and it irritates me greatly when people keep trying to apply the MVP pattern to ASP.NET pages. I think the basic principles of MVP can be had and enjoyed in practice in ASP.NET by applying simple abstraction techniques, but to call it MVP and insist on the full implementation of MVP on a web page or web site is just ridiculous in my opinion.

I'm still waiting for a decent ASP.NET pattern to come around. Castle Windsor, MonoRail, Spring.NET: these things haven't gotten me inspired yet, particularly where they break the preexisting ASP.NET coding guidelines by producing ASP Classic-style code. There's also the issue of web servers being stateless, and of navigating through a web app by way of URLs. These things make the application of patterns like MVP useful only on the "snapshot" basis of a particular page, or a particular control on a page, and not very useful across a site except for headers and shopping carts where some level of session state is retained.

Patterns aside, though, there is one pattern that is universal, and that is to design top-down and implement bottom-up.

Why? Well, If You Touch The Hot Stove, It Will Burn You

What happens if you design top-down and implement top-down?

This is like trying to teach graphic designers to write software and then utilize their first output. Do I even need to explain why this is bad? If you're building your GUI first, you're going to be building your programming logic on the GUI implementation. That's ridiculously foolish. (And it's also part of the reason why applying MVP to ASP.NET web apps won't work, because the view is pre-built for you in the form of web pages and HTTP calls.) The GUI is only an interface to some computing task, and the data objects are only data containers. The computing task is the core of the functionality--not the data objects and not the GUI. You should be focusing on the computing tasks first. In an operating system, that starts with the kernel, hardware drivers, and file systems, NOT the GUI shell or screensaver.

If you design from the bottom up and implement from the bottom up (*cough* LINUX!!), which is essentially the same as letting the developers design the application, you're going to get a mess of useless features and solutions, even if well-tested, that integrate poorly and do not have appealing public interfaces.

There is actually a rather fine line between designing top-down and architecting top-down. The design includes the architecting, to the extent of public and inheritable interfaces. It includes laying out some basic software premises, even defining object member stubs. For that matter, UML could be used to design top-down. I personally like to just use Notepad.

My Case Study

I recently produced a successful mini-project at work, an object-oriented, Lucene-based search engine, using design top-down, implement bottom-up. There had been some internal debate about whether a Lucene.NET-based search engine should support the abstraction of indexed objects. In my view, any of our strongly typed objects was just a data structure to Lucene, and should be mapped and retrieved as a data structure. Here's what my top-down design looked like. I did it at home in my living room on my laptop, and it became the bottom-up outline for my implementation efforts:

Searchable Object Data Architecture Design (Proposed)

Listing search web page -> instantiates, populates, and invokes -> Typed listing query
Typed listing query -> exposes -> implementation-specific (Horse, Boat) fields for
                                     - value matching
                                     - range matching
                               -> other implementation-specific query rules
                     -> returns -> Listing implementation objects (Horse, Boat) 
                                   or ListingData objects (from base)
                     -> is a -> Listing query

Listing query -> exposes -> common Listing fields for
                              - value matching
                              - range matching
                         -> other common Listing query rules
              -> obtains -> resulting ListingIds (using base)
              -> invokes -> SQL server to populate Listing objects
              -> returns -> fully-populated ListingData objects
                            with fields from joined table(s)
              -> is a -> General query


Story search Web Page -> instantiates and populates -> Story query
Story query -> obtains -> all stored fields from index
            -> returns -> story objects
            -> is a -> General query
    


General query -> retains -> query schema definition 
                            (field-level type / storage / index / rangeable query metadata)
                         -> result data structure definition
              -> returns -> list of either: IDs (ListingId, story URL, etc), or data structure
              -> instantiates, populates, and invokes -> Lucene query
              -> instantiates, populates, and invokes -> SQL query

              
Lucene query -> retains -> schema definition 
                          (field-level type / storage / index / rangeable metadata)
             -> consists of -> query fields
                            -> field ranges
                            -> pagination ranges 
                            -> return data structure (named fields per result hit)
             -> invokes -> Lucene search engine
                             - by contained index (intra-process)
                             - by IPC (inter-process)
                             - by TCP (wire)
             -> sends on invoke -> itself
             -> obtains -> list of either 
                           a) specified key field (IDs), or 
                           b) set of all stored fields for document

Lucene search engine -> has a -> CLR assembly (DLL) stub (for client invocation)
                     -> has a -> Windows process stub (for server debugging)
                     -> has a -> Windows service stub (for server deployment)
                     -> retains -> configurations from web.config (client invocation) 
                                   and app.config (server invocation)
                                -> Lucene indices
                     -> accepts -> Lucene query
                     -> returns -> list of data structures 
                                   (named schema == stored fields 
                                   in each matching document in index)

I literally wrote this top-down, block by block, as I thought about how to accomplish the block above. After chewing on it a bit for a few days, I did some refinements, dropped a few features, changed out a few relationships, and wrote up a task prioritization list, first by reversing the order of the blocks above, and then writing (as in, with a pen) the sequential order in which I would need to implement things in order to accomplish the end result. I added inline tests to my code (I had not yet learned TDD, so I didn't add NUnit tests or any [Test] attributes or get anything formally built for complete testing) and did not proceed to the next step up the chain of functionality until each piece worked flawlessly. And I frequently returned to the above design "spec" to keep myself on track and to avoid deviating from the ultimate objectives.

I did deviate a bit from the design, but only where the design was flawed. For example, instead of having a "Lucene query" send itself to the search engine, I created an abstract "QueryHandler" class, and from that I created a LuceneQueryHandler. I did the whole LuceneQueryHandler --> LuceneService connection bit using WCF, which met most of the connection variation requirement with one move. (Where it didn't meet the requirements, the requirements were dropped.)
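The QueryHandler / LuceneQueryHandler names come from the project itself, but the bodies below are invented for illustration (and sketched in Java rather than the original C#). The shape is a template method: the base class owns validation and dispatch, the subclass owns transport-specific execution:

```java
import java.util.List;

// Illustrative sketch only; the real classes' members are not shown in the post.
abstract class QueryHandler {
    // Template method: shared validation here, execution in the subclass.
    final List<String> handle(String query) {
        if (query == null || query.isBlank()) {
            throw new IllegalArgumentException("empty query");
        }
        return execute(query);
    }

    protected abstract List<String> execute(String query);
}

class LuceneQueryHandler extends QueryHandler {
    @Override
    protected List<String> execute(String query) {
        // A real implementation would forward the query to the Lucene service
        // (in-process, IPC, or over the wire via WCF); stubbed here.
        return List.of("doc-1 matching " + query);
    }
}

public class QueryHandlerDemo {
    public static void main(String[] args) {
        QueryHandler handler = new LuceneQueryHandler();
        System.out.println(handler.handle("horses for sale"));
    }
}
```

The win over the original "query sends itself" design is that a SqlQueryHandler or an in-memory test handler can be slotted in without the query object knowing anything about transports.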

After implementing these things (except for the SQL and Object index portions) I started calling the query objects from a web page. That in itself was a month of effort, but fortunately the effort had very little to do with querying, and everything to do with things like CSS and layout decisions.

In the end, I had produced a moderately well-tested and moderately stable search engine system, where failures in the system were not caught and debugged from the web page code-behind but in their appropriate place, i.e. the search engine server itself. I then went through and documented all of the code by enabling XML documentation warnings and filling in the missing API documentation holes, which took me about four hours, and produced a complete Sandcastle output for all of the assemblies I produced.

You'll notice that there is no conformance to MVC or MVP here. In my opinion, MVP is in some cases completely inappropriate. Software frameworks don't have to rely on such patterns to be well-built.

There are obviously lots of ways I could have approached that project better, not the least of which is TDD. But the way the mini-project was done, both the general clarity of its design from top to bottom, and its focused but complete implementation from bottom up, still has me tickled with pride. See, when I started this project I was extremely intimidated. I didn't have much experience with indexing queryable objects, except simple experience with SQL queries and XML XPath. I didn't know Lucene.NET, and all of this seemed so foreign to me. But after sitting down and thinking it through, top-down, and implementing it, bottom-up, I feel much more confident not only that the software is going to work flawlessly as designed but also that I am far more competent as a software developer than I had originally made myself out to be.

Keep It Simple, Yo!

The KISS principle (Keep It Simple, Stupid!) applies directly to the top-down design. Forgetting to keep the top-down design as simple and as straightforward as possible is likely the key reason why XP / Agile programming practitioners are scared of entering design phases. But bear in mind, simple does not mean vague so much as it means terse. The design of each part of a solution should match the audience who will implement it. Look at the above design for the Lucene search service. I did that on my own, but on a larger project with more people involved, the following is how I probably would have gone about the top-down design. Someone with the role of Architectural Manager would be responsible for arranging these pieces to be accomplished. These designs should occur in sequential order:

  1. The web API developer would only suggest, see, verify, and sign off on the first requirement ("Listing search web page -> instantiates, populates, and invokes -> Typed listing query"), and will be counting on the business objects developer to provide the callable object(s), method, and return object(s)
    • This is as close to UI design as an API can get. It defines the true public interface. Ideally, the requester of the requirement calls the shots for the public API, but is required to be basic and concise about requirements so as to not require anything more or less than what he needs. He also gets to request the interfaces exposed by the business objects that provide him with what he needs, although it is ultimately the business objects developer's job to define that interface.
  2. The business objects developer will only see, verify, and sign off on the second level of abstraction (Typed listing query, Listing query, Story query, while understanding the public interface requirement), and will be relying on the query developer to provide the dependency code. The business objects developer gets to request the public interface exposed by the data structure query developer's solution, although it is ultimately the data structure query developer's job to define that interface.
  3. The project's data structure query developer will only see, verify, and sign off on the third level of abstraction (General query, while understanding the calling requirements), and, not actually knowing anything about how Lucene works, he will be relying on the Windows client/service developer to implement the Lucene query handler. The query developer gets to request the public interface exposed by the client/server developer, although it is ultimately the client/server developer's job to define that interface.
  4. The project's client/service developer will only see, verify, and sign off on the fourth level of abstraction (Lucene query handler, and a Windows service with a WCF listener). The client/service developer will request the interface exposed by the implementation of Lucene.Net, although it is ultimately the Lucene.Net implementation developer's job to define that interface
  5. A Lucene.Net implementation developer will only see, verify, and sign off on the actual Lucene.NET querying that is to be done, while understanding the notion that it is being called from within a Windows service that is being passed a General query.
  6. The client/service developer and the Lucene.Net implementation developer would collaborate and determine a design for Lucene indexing strategy (so that Lucene.Net has data to query against).

The rule that each person sees, verifies, and signs off on only his own design is NOT a hard, enforceable one. For example, for ongoing maintenance it can be a real hassle for a software developer in California to have to sit around and wait for a database administrator in New York to get around to adding a couple tables to a database and web site hosted in Florida so that he can get some code that calls into the database tables working successfully. The same nightmare could apply to software design. But the general understanding of who the audience is for each part of the design is what makes the design accessible to each person in the implementation process. There is no need to write a design document having engineering design requirements that are readable by management staff but are too wordy or plain-English for an engineer to be willing to pick up and read. Keeping it simple can literally mean being free to use the terse verbiage and tools employed by the engineers' toolset, such as using pseudocode to express a requirement that would take longer to express either in plain English or in real code.

Sequence For Testability

For the implementation of this design, the responsibility would go in reverse:

  1. The Lucene.Net implementation developer would index some sample data. The data would be in the same format as one of the proposed data structures. In a testbed environment, he would test Lucene.Net itself to be sure that basic Lucene.Net queries work correctly.
  2. The Lucene.Net implementation developer and the client/service developer would work together to implement a Lucene service that could load an index.
  3. The General query developer would build a general query class. It would hold query data including condition filters and sorting requirements, and the structure of the data and the parsing of conditions would be tested and accounted for. The query would not execute (that is handled by the query handler).
  4. The Lucene.NET developer would populate instances of the general query class and perform Lucene.Net queries against it. He should debug the querying of Lucene.Net until all tests pass.
  5. The client/service developer would create a query handler that could handle the passing of the general query from the client to the Lucene.Net implementation's interface. He should debug the client/server calls using populated query objects until all tests pass.
  6. The business object developer would build a typed query builder class and a typed query result class. He should test with queries and query results until all tests pass.
  7. The web API developer would build web components that call upon the business objects that facilitate querying and that render the results in HTML. He should test with queries and query results until all tests pass.
  8. The web developer / designer would build the web page, dragging and dropping the web API developer's output to the page, stylizing it with CSS. He should test design and variations on Web API output until all web design tests pass.
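As an illustration of step 3 above, the "general query" class can be little more than a bag of condition filters, sort fields, and pagination data, with deliberately no execute method of its own, since execution belongs to a query handler. A sketch (in Java for illustration; all names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Carries query *data* only: conditions, sort order, pagination.
// Execution is intentionally absent; that is the query handler's job.
class GeneralQuery {
    record Condition(String field, String op, String value) {}

    private final List<Condition> conditions = new ArrayList<>();
    private final List<String> sortFields = new ArrayList<>();
    private int pageSize = 20;

    GeneralQuery where(String field, String op, String value) {
        conditions.add(new Condition(field, op, value));
        return this;
    }
    GeneralQuery sortBy(String field) { sortFields.add(field); return this; }
    GeneralQuery pageSize(int n) { pageSize = n; return this; }

    List<Condition> conditions() { return conditions; }
    List<String> sortFields() { return sortFields; }
    int getPageSize() { return pageSize; }
}

public class GeneralQueryDemo {
    public static void main(String[] args) {
        GeneralQuery q = new GeneralQuery()
            .where("price", "<=", "5000")
            .where("type", "==", "Boat")
            .sortBy("price");
        // A handler, not the query itself, would translate this into a
        // Lucene (or SQL) call.
        System.out.println(q.conditions().size() + " conditions, sorted by "
            + q.sortFields());
    }
}
```

Keeping the query as pure data is what lets step 4 and step 5 test the Lucene side and the client/service side independently against the same object.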

This might seem like a no-brainer, but it really can be easy to approach these steps backwards, on either front, but mostly on the implementation side, going top-to-bottom instead of bottom-to-top. It's tempting to just throw together some HTML placeholders, then start writing some code-behind placeholders, then start building out some method stubs, then create a new class that performs a quick-and-dirty Lucene call directly from a file somewhere. It clearly wouldn't perform well, it would likely be buggy and lack many features, and it wouldn't be a reusable solution. Once again, I'm guilty. And in the case of this search engine service, I believe I found a much better approach to pursuing software mini-projects.


Paying Attention To DinnerNow.net - Microsoft Killer App Best Practices Tutorial for Everything .NET 3.0

by Jon Davis 31. July 2007 18:01

I've been noticing my Channel9 RSS feeds plugging some "DinnerNow" thing that sounded like some third party company just sharing some architectural trick up their sleeve, so I kinda shrugged it off. After all, how many software tricks are up people's sleeves at sourceforge.net or elsewhere?

But being a big new fan of ARCast.TV podcasts, I came across the Architecture Overview and Workflow piece, and I didn't realize until I started listening that it was Part 1 of the DinnerNow bits. Listening to it now, I am realizing that this is something rather special.

DinnerNow, which I've only just downloaded and haven't actually seen yet (though I'm hearing these guys talk about it in the podcast), is apparently NOT a Starter Kit style mini-solution like IBuySpy was (which was a dorky little shopping cart web site starter kit that Microsoft hacked together in ASP.NET back in the v1.1 days as a proof of concept).

Rather, DinnerNow is a full-blown software sample of an online restaurant food ordering web service application, one that I had wanted to build commercially for years (along with a hundred other ideas I had). It is a top-to-bottom, soup-to-nuts, thorough and complete implementation of the entire solution, from the servers to the restaurants to the buyer, demonstrating all of the awesome components of the latest long-released .NET Framework technologies, including:

  • Windows Communication Foundation (WCF) on the restaurant and web server
  • Windows Workflow Foundation (WF) on the restaurant and web server
  • Windows Presentation Foundation (WPF) for the restaurant (kiosk @ kitchen)
  • ASP.NET AJAX 1.0 and JSON-based synchronization
  • CardSpace for user security
  • PowerShell for things like administrative querying
  • Microsoft Management Console (MMC) for administrative querying, graphically
  • Windows Mobile / .NET Compact Framework
  • Windows Management Instrumentation (WMI)

Ron Jacobs (the ARCast host) made a good introduction for this with a statement I agree with, something along the lines of, "As architects, what we often try to do is look at someone else's software that was written successfully," and learn and discover new and/or best practices for software from it. In fact, that's probably the most important task in the learning process. If learning how to learn is an essential thing to learn in software architecture, then learn this essential point: both understanding new technology and discovering best practices come from seeing it for yourself.

Update: Ugh. With many technologies being features comes many prerequisites. I had all of the above, but the setup requires everything to be exact. In other words, start with a fresh Virtual Machine with Vista 32-bit, and add the prerequisites. XP? Screwed. Win2003? Screwed. 64-bit? Screwed. Orcas Beta 2 installed? Screwed. And so on the screwing goes. An optional Virtual PC download from MSDN premium downloads with everything installed would have been nice. And I'm not hearing anything from the CodePlex forum / Issue Tracker, not a peep from its "maintainers". I think this project was abandoned. What bad timing, to put it on ARCast.tv now...

Another update: Microsoft did post a refresh build for Visual Studio 2008 Beta 2. I noticed it on CodePlex, and they also started replying (for a change) to the multiple discussion threads and Issue Tracker posts on CodePlex. Among those replies were comments about 64-bit support: not supported by design (which I knew), but they said that with workarounds (which I posted multiple reports about) a 64-bit OS target build should be possible. I saw all this a few days ago; I should have posted this follow-up update earlier, as it seems a Microsoft employee posted a comment here first. (Sorry.)

With passion comes passionate resentment when things go wrong. It's a fair give and take; on the other hand, whatever.


 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.



