Mutability, Linked Lists, and Enumerators (C# vs. Objective-C 2.0)

by Jon Davis 30. August 2008 22:22

A few job interviews ago, I was asked about the meaning and/or significance of the term "immutable". I knew that immutable means something that doesn't and cannot change, and that strings in C# are immutable, as are the primitive value types. I knew that structs, being value types, typically get stored on the stack and, if small, tend to be more performant. But I didn't have the background to put two and two together, so I wasn't able to explain why it mattered.

I'm still trying to wrap my head around how stack optimizations really work, actually. But meanwhile, another job interview asked me what I knew about data structures and, "if .NET had no arrays, how would you manage multiple objects"? I was shocked at myself. These were ridiculously simple questions, going back to the basics of computer science. For that matter, these were among those rare questions that favor the value of a college education, which I lack, over experience. But it takes neither a college education nor much experience to suggest a "delimited list in a string", which for some weird reason I didn't even think of--although, technically, any such serialization requires an array of char or byte.

I could be wrong but I *think* the correct answer to "in the absence of .NET arrays, how could you manage a set of n objects" would be to create a linked list. Having been abstracted from how lists work for so long, I never even looked this simple thing up until now.

http://en.wikipedia.org/wiki/Linked_list

http://en.wikipedia.org/wiki/Stack_(data_structure)
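For my own benefit, here's a minimal sketch of what I mean--my own illustrative code, not from either article--a singly linked list that can manage n objects with no arrays at all:

```csharp
// Hypothetical sketch: a minimal singly linked list, using no arrays.
public class Node<T>
{
    public T Value;
    public Node<T> Next;
    public Node(T value) { Value = value; }
}

public class LinkedListSketch<T>
{
    private Node<T> _head;
    private Node<T> _tail;

    // Append in O(1) by keeping a tail reference.
    public void Add(T value)
    {
        var node = new Node<T>(value);
        if (_head == null) { _head = node; _tail = node; }
        else { _tail.Next = node; _tail = node; }
    }

    // Walk the chain from the head; O(n) to reach the i-th item.
    public T ItemAt(int index)
    {
        var current = _head;
        for (int i = 0; i < index; i++) current = current.Next;
        return current.Value;
    }
}
```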

Glad that's over. I feel so dumb (and no, darn it, I did NOT ride the short bus to school!!), but refreshing myself on that is finally behind me.

Anyway, back to the mutable vs. immutable discussion: the Cocoa API in the Mac OS X world places a strong emphasis on separating certain objects, most notably collection objects, into mutable (NSMutableSet, NSMutableArray, NSMutableDictionary) and immutable (NSSet, NSArray, NSDictionary) variants. They're completely different classes that perform the same function but with different optimizations, so the immutable (unchanging) version, which is the default (lacking the "Mutable" in the name), is extremely performant.

C#'s APIs and compiler abstract this stuff so significantly that most C# developers (like me) would likely find all this really odd.
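By way of rough comparison--this analogy is mine, not from the article--C#'s closest built-in counterpart is probably wrapping a List&lt;T&gt; in a ReadOnlyCollection&lt;T&gt;, though unlike an immutable NSArray the wrapper is a live view rather than a frozen copy:

```csharp
using System.Collections.Generic;
using System.Collections.ObjectModel;

public static class ImmutabilityDemo
{
    public static int[] CountsBeforeAndAfter()
    {
        var mutable = new List<string> { "a", "b" };
        // AsReadOnly() returns a read-only wrapper, not a snapshot copy;
        // callers cannot Add/Remove through it, but unlike an immutable
        // NSArray it still reflects later changes to the underlying list.
        ReadOnlyCollection<string> frozen = mutable.AsReadOnly();
        int before = frozen.Count; // 2
        mutable.Add("c");
        int after = frozen.Count;  // 3 -- the wrapper sees the change
        return new[] { before, after };
    }
}
```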

Here's the blog post that motivated me to post this. http://cocoadevcentral.com/articles/000065.php 

Incidentally, I noticed something else about Cocoa. In C#, the syntactically-ideal iteration loop for a collection looks like this:

foreach (string s in myArray) {
    Console.WriteLine(s);
}

The problem is that foreach can carry a performance hit versus using

for (int i=0; i<myArray.Length; i++) {
    string s = myArray[i];
    Console.WriteLine(s);
}

.. which happens to be fuglier. And, to be honest, the indexed loop shouldn't be faster when working with linked lists, because a linked list's indexer has to walk the list from the head to find the i-th node, whereas an enumerator on a linked list requires no re-walking. (For an array or a List&lt;T&gt;, of course, the indexer is a direct O(1) lookup.) So I don't get it.

Meanwhile, though, C# 2.0 introduced List&lt;string&gt;, which offers the delegate-driven ForEach( delegate (string s) { .. } );, which can be noticeably more performant.

C# 2.0:
myList.ForEach ( delegate (string s) {
    Console.WriteLine(s);
} );

C# 3.0:
myList.ForEach( s => Console.WriteLine(s) );

In C#, AFAIK, foreach internally just obtains an IEnumerable object's IEnumerator and then calls its MoveNext() method in a loop, but then so does List&lt;T&gt;.ForEach, so I don't know (I haven't looked at the MSIL in Reflector) what the two are doing differently.
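As a sketch of what I mean--this is my own hand-expansion, not decompiled MSIL--a foreach over an IEnumerable&lt;T&gt; is roughly equivalent to:

```csharp
using System.Collections.Generic;

public static class ForeachExpansion
{
    // Roughly what the compiler emits for: foreach (string s in items) { ... }
    public static int CountByEnumerator(IEnumerable<string> items)
    {
        int count = 0;
        IEnumerator<string> e = items.GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                string s = e.Current;
                count++; // the loop body would go here, e.g. Console.WriteLine(s)
            }
        }
        finally
        {
            e.Dispose(); // foreach disposes the enumerator when done
        }
        return count;
    }
}
```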

But Objective-C 2.0 apparently went through a wholly different process of evolution, where its foreach equivalent is actually much faster than its enumerator MoveNext() equivalent.

http://developer.apple.com/documentation/Cocoa/Conceptual/ObjectiveC/Articles/chapter_8_section_1.html
http://cocoawithlove.com/2008/05/fast-enumeration-clarifications.html

Interesting stuff.

http://theocacao.com/document.page/510

OS X == Serious MVC

by Jon Davis 28. August 2008 22:39

Another quickie post. As I spend more time on my Mac Mini and tinker with Xcode, it's really starting to sink in that all this talk over the last few years about MVC (Model-View-Controller) runs as a heavy theme through the entire workflow and the almost mandatory software architecture patterns of the OS X environment.

You've got Objective-C forcing you to declare your controllers and objects in a very loose fashion, managing everything with delegates and interfaces.

You have Interface Builder forcing you to build out your views with zero application logic but with hooks everywhere. (This is unlike the code-behind model in Visual Studio, which applies to all views.)

Then on the data side you have Core Data, http://developer.apple.com/macosx/coredata.html, which really goes a long way to facilitating MVC relationships with the views in Interface Builder and with the general flow of the application.

In some ways, I'm reminded of the amount of effort Microsoft has gone through in Visual Studio to support data binding, except there's an incredibly beautiful simplicity, yet sophistication, about Apple's approach. You don't just data bind to inject data into a view, as in dumping data; rather, you provide the data models and the controllers to handle the delegates and the actions of the view. The implications of these differences can be staggering. And I know that Visual Studio/.NET facilitate delegation for these things (raising events), but the approach is just not as MVC-oriented. Visual Studio doesn't enforce MVC; Xcode/IB does. Visual Studio allows for MVC, that is, but Xcode/IB imposes it--or at least makes it really obvious that it's the best way to work in Xcode.

Which one is better? Neither; Visual Studio offers versatility whereas Xcode/IB offers elegance and predictability. I love them both.

Objective-C ~ C# 3.0

by Jon Davis 26. August 2008 10:50

Just a quickie note, as a side interest (for fun) I'm trying to get back into Cocoa development (still in newbie stage) and I'm watching the iPhone introductory videos. There's a part of one of the videos related to Cocoa Touch that had me reminded of C# 3.0.

  1. Objective-C uses delegates as an alternative to method overrides. I don't recall, outside of callbacks / function pointers, whether C/C++ supported "delegates" in the first place. But the approach taken by Cocoa here really surprised me, how significantly it looked like my Evented Methods pattern [ 1, 2, .. ] I proposed a few days ago, since technically events are just anonymized delegate invocations. Kinda gave me goose bumps, seeing evidence that I was on the right track.
    • On a similar vein, @selector returns a SEL, the type name for a selector, whose value is effectively just a const char* holding the method name. Selectors are good for invoking methods whose names you don't know until runtime.
  2. Objective-C has something called "categories", which I feel is a terrible name, but given the description given by the presenter in the video, Objective-C's categories sound functionally the same as C# 3.0's extension methods, although not in form. So, Objective-C supports extension methods; they're just called "categories".
  3. It's easy to take C# constructors for granted. In Objective-C, the equivalent functionality is the far less elegant invocation of an "init" method (an initializer).
  4. Properties, as distinct from methods, functions, and fields, are first-class in Objective-C, as they are in C#. The C# 3.0 compiler's shorthand of myType MyProp { get; set; } as an inferred implementation of a private backing field plus a getter and a setter isn't a new concept, either. While the form is different, Objective-C has something similar: a @synthesize keyword that generates the implementation code for the getter and the setter.
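To illustrate (my own example code), here's the C# 3.0 shorthand next to the hand-written equivalent the compiler effectively generates:

```csharp
// C# 3.0 auto-implemented property -- the compiler generates the backing field.
public class PersonShort
{
    public string Name { get; set; }
}

// The hand-expanded equivalent of what the compiler effectively generates.
public class PersonLong
{
    private string _name;
    public string Name
    {
        get { return _name; }
        set { _name = value; }
    }
}
```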
Note: this blog entry might (or might not) grow.


Software Development | Mac OS X

YAGNI! .. Or are you?!

by Jon Davis 25. August 2008 05:41

SD Times recently published survey results for the question, "What practices do you consider most essential during software construction?", with these top 5 rankings (answers are ranked by importance on a scale of 1 to 5):

  1. Emphasis on functioning code (YAGNI) (4.18)
  2. Unit-testing (4.14)
  3. Continuous integration (4.13)
  4. Delivery cycle 2 months or less (3.91)
  5. Emphasis on code aesthetics (3.72)

Surprisingly, code aesthetics beat out code reviews, dedicated QA staff, even language choice. 

One philosophy I have a problem with is YAGNI ("You Ain't Gonna Need It!"), which is basically the rule that if it's not required, don't add it. But let me clarify my frustration. It isn't with the principle of not adding code you don't need. I agree with the principle of avoiding unneeded bloat, weight, and garbage code in a codebase by never implementing things you don't need.

I was inspired to post this blog entry after reading someone else's blog entry, and its comments, that I agree with on the topic:
http://blog.qualityaspect.com/2006/06/10/the-semi-myth-of-yagni/

My issue is with the verbiage of the acronym and the brainwashing it has done to people. It's a terribly broad and unqualified statement, and it's also unacceptably presumptuous. It shows that YAGNI is apparently meant only for actual software construction with tight, documented requirements, not for continuous development and engineering such as maintaining and growing a web site or an IT system. Otherwise, just because an idea pops into a developer's head doesn't mean you're not going to need it. My own code tends to get a little bit large, but that's because I evaluate, reevaluate, and reevaluate again what's required to make an implementation bug-free. I test my own code and expand as needed. More often than not, bugs are the absence of needed internal features that were not included during the design phase. Technically, every line of code is a separate "feature" of a routine. For example, invoking a clean-up routine is a "feature", but don't tell me it wasn't needed just because we forgot to consider it in the requirements.

Usually the explanation of YAGNI goes something like, "If you came up with a cool method or routine or event or event handler that you want to put on the object, but it wasn't a part of the requirements, do not add it, period! Even if you have plans to use it, unless it's a part of the requirements, do not add it!" I agree with it to an extent; extensive unnecessary code is and results in waste -- waste of time, energy, and, in the long run, money. The problem is that the requirements do not always consider the scope of the implementation. Sometimes the implementation introduces additional requirements, or additional requirements get introduced during the discussion of implementation. For example, "Send this object over the wire to that service and have it send this other object back." Is there only one class of this object? Do you want exactly 2 class forms / interfaces / contracts to be sent and received? Or, do you want to package the class into a generic data transfer contract instead?

And sometimes features are desirable for the sake of reusability and flexibility. A lot of my concern boils down to two architecture-affecting developers (i.e. myself and another) not agreeing on what they're going to use a software component for. My argument was that the objects being sent over the wire should be generic DTO contracts of dictionary-like structures, sent using WCF, rather than strongly typed class serializations over Remoting. The other individual argued that their use was the only use intended for this component, and that my intentions of reusing it on other things were beyond the scope of development. "You're designing for the unplanned future." But I had big plans, indeed. In the end, my broad-based reusable scenario was the one that was implemented, and it was implemented and deployed successfully, and the scenario-specific implementation never even happened, primarily due to the other person's protest of the whole situation, insisting YAGNI. And, yes, my codebase was large. But it proved worth it when it was deployed all over the place for many projects. We were one management decision short of open-sourcing the client-server component as a one-of-a-kind solution for the .NET community, in which case it could have been used not just on several of our projects but all over the world, too. (There was one lingering bug we had a hard time tracking down, but it had little to nothing to do with my genericization--and lots to do with the other individual's refusal to jump in and be a part of the component's upkeep while I was on it.)

In the short term, spreading out beyond the initial requirements slowed us all down. But in the long run, we had one reusable codebase to support them all, which was much easier to manage across multiple projects than strongly typed transfer over Remoting. Had we coded to the specific requirements, imagine strongly typed transfer code reproduced over and over again each time a new set of requirements came along, each time with a different coding style, each time with a different set of potential bugs. For that matter, we wouldn't have implemented the feature in other solutions at all, because we'd have had nothing to reuse except proof of practice. Agile would prove to not be agile enough. Needless to say, I'm fast losing interest in Agile methodology and philosophy, as I'm a huge believer in making large short-term investments for long-term futures. True agility, to me, is in building tools and reusable components. But software built as such is not easily billable; Agile developers want to account for their every effort each time. Their philosophy seems to be to waste no time but lose no billable hour to prebuilt work.

Now then, someone blogged about the YAGNI scenario of one developer wanting to use BizTalk to fetch a file for processing, while another wanted to use WebClient. The latter won the YAGNI argument, I guess. But my questions might be, "Is BizTalk already used in the solution? Does it already perform similar fetches for processing?" I'm obviously appreciative of going lightweight, in general. This is an extreme scenario--you can't get much lighter weight than WebClient--and in a solution that doesn't already have BizTalk, I'd say just use WebClient, too. But if there are business protocols that can be kept in check by BizTalk, then don't throw consideration of BizTalk out, and certainly be careful about using multiple mechanisms for the same functionality.

These things said, I'll nod in agreement that if a new feature is literally a new feature, and not a dependency feature needed to round out the completeness of a broad requirement's implementation, don't implement it in the production codebase; it's a prototype. Treat it as prototype code--work on it in your spare time, and put it in a separate project that has the word "prototype" in the project name. If you really love the idea, present it to the manager / designer / architect to see if it should be added; if not, shelve it--maybe it will come in handy later. But don't put it in the production codebase.

I think the better philosophies over YAGNI are Be Intentional and Plan For Reuse. Everything should have a reason and a purpose. If you're going to add code, be prepared to argue for it, test it, document it, or whatever else needs doing, and ask yourself, "Is it worth it? Or should I find a cheaper workaround?" Planning for reuse, meanwhile, means that lightweight adjustments and additions are called for if a component can be reused elsewhere while still passing all current requirements, as long as the time and effort to reuse the component is extremely small from that point on and you have absolute cases for reuse (you're not just hoping something might come up later).

In my opinion, many developers especially at the senior level are also decision-makers at the finer levels of detail, and as decision makers they must often make such decisions as, "We actually do need this new feature because I actually will be needing to use it next month and I really won't have time to implement it at that time." Now if that gets backed up with just saying, "I'm in the zone in this module right now and I don't want to lose what I have in mind," I might acknowledge it might be best to call it prototype code and to export and remove it from production code after some satisfying implementation and testing are done. However, the point I am trying to make is that sometimes the developer is right, and the requirements fell short. I suppose in a cranky corporate environment the developer would need to seek permission for the feature to be added by way of adding the feature to the requirements. But if the interest is in getting the job done quickly with minimal overhead and bloat, I'd suggest to just sneak it in and apologize and explain yourself later. Again, be intentional, and be able to back up your actions with reason. The saying, "Rules are meant to be broken," fits like a glove with YAGNI. (But you'd better be right about your violations, and have authority to do the violating.)


Software Development

Amazon EBS Introduced: Now Suddenly I Care Again About Amazon EC2

by Jon Davis 21. August 2008 10:28

I shrugged off Amazon EC2 a couple of years or so ago when they made it available, after spending a couple of days trying to play with it and see how it ticks. My reason for shrugging it off? If a virtual machine crashes, everything gets destroyed--all custom apps, all data, everything. The best you can do is build a virtual machine image that has everything preinstalled on it and then use something like Amazon S3 to load and back up the data store. The problem with that is that it requires plumbing and engineering, as well as 100% planned downtime (no crashes or other unexpected reboots allowed, lest you lose everything since the last backup).

Now Amazon introduces this:

http://www.amazon.com/gp/browse.html/ref=pe_2170_10160930?node=689343011

With this, suddenly I'm interested again, because now file storage persists for an EC2 instance when used in conjunction with EBS. I knew this day would come--if not from Amazon, then from Aptana's Cloud or from Microsoft's Mesh.


Computers and Internet

IoC By Way Of Simple Event Raising: Part 2 - Contained Interfaces and Evented Methods

by Jon Davis 18. August 2008 18:21

This post is Part Two on the topic of "IoC By Way Of Simple Event Raising".

Previous parts:

 This post has an associated source code package: IoCEvents_Part2.zip

In Part One of this discussion, I argued that people have become so dependent upon IoC frameworks that they have forgotten the clean, simple, and very versatile IoC mechanisms already bundled in C# in the form of interfaces and C#'s eventing subsystem. I ended the article indicating that I felt it was necessary to prove out the testability of a system that uses plain and simple eventing as an IoC methodology.

That proof of testability will have to wait until Part Three (or later). In this part I want to prove out a somewhat useful prototype of real-world dependency injection using eventing. The scenario I chose is the classic data model / data provider scenario:

  • A data model needs to persist itself,
  • A data provider performs the persisting, independent from the model itself

And,

  • A data persistence provider makes itself available to persist the data model by subscribing to the model's events.
  • The provider is one among multiple providers that each can handle the data model by subscribing to the model's event interface definitions.

In .NET, an abstract, hot-swappable data provider scenario is already supported with ADO.NET / System.Data. I'm not trying to reinvent the wheel here, rather I'm just trying to demonstrate a prototype for the purposes of discussion.

Disclaimer

I'm prototyping a lot of ideas that are original to me, but not original to computer science. So terms, verbiage, diagrams, et al. might be a little different from what is taught in college or trained in enterprise shops. I'm also still introducing myself to some of this stuff, being, for example, untrained in Castle Windsor and other IoC containers (due to my distaste), so I might be off on some things. In such cases, I would greatly appreciate constructive correction, and if my commenting system below this article doesn't work, you can e-mail me at: jon -AT- jondavis.net.

Provider Dependent Interfaces, IoC Subsystems, Contained Interfaces, and Evented Methods

Provider Dependent Interfaces

Classically, a provider would provide all of the CRUD operation interfaces on behalf of a data model, and there would be a one-to-one relationship between the application and the provider and between the provider and the model. Switching out one provider for another would result in a potentially different type of model altogether--which is obviously a matter of concern. And to perform actions against the model, the application would have to go straight to the provider to manage that detail.

Therein lies one of the problems. The application code has to know too much to manage the data, and usually, the model would be completely clueless about how to persist itself back to its origin and would be completely dependent upon the application logic to keep track of the provider. The provider and/or the application would own everything related to how to manage data persistence.

Another direction one might take is to provide provider hook interfaces within the model that allow the model to "speak to" the provider, whereby:

  1. a global provider exists,
  2. a rich suite of data interop code already fleshes out most of these scenarios, or
  3. the model retains a common API reference back to the provider (2-way interfaces), which I'll discuss later.

In the first scenario, the model might expose a "Save()" method, which would invoke a global utility library to persist the model back to the database. This scenario is obviously very limiting, in the same way that global-anything is generally frowned upon. What if, for example, you had two disparate data sources? What if they persist in completely incompatible ways?

In the second scenario, Visual Studio (for instance) has a rich set of tools and controls whereby you can hot-swap one set of ADO.NET resources for another. These tools support not only SQL SELECTs but all of the CRUD operations associated with basic data access and manipulation. The problem then becomes that the application logic gets built upon this persistence framework that is ADO.NET and its support controls, and the developer has to constantly think in ADO.NET terms, which tend to get quite messy when real-world business and domain logic starts to surround the data models--rather than using small, discrete, and simple business objects that expose a persistence pattern while being essentially plain C# objects, inheriting from anything, not DataRows or DataReaders. Further, Visual Studio-style data binding in general encourages a one-to-one relationship between views and data models, which is often not appropriate. Fortunately, Visual Studio does go a long way to support data binding to objects, but from what I have seen it is rarely used, or rarely used correctly. While we're sidetracked on data APIs: Microsoft is going a long way to make the second scenario fully work in most situations, as seen in LINQ and in the new Entities initiative, and there are other very extensive libraries, such as NHibernate and EntitySpaces, that manage this stuff well. I am not about to discuss the merits of these; they are very useful. My objective is not to look at data APIs but rather at model dependencies in general C# terms.

IoC Subsystems

What IoC container libraries typically do in this scenario is provide an abstraction interface that invokes the provider functionality on behalf of the model, in a mediator role. It acts as a "man in the middle". In that way, the application code can be clueless but trusting about how the operation is accomplished, the mappings can be swapped out in a module isolated from the logic, and the model can make assertions and take actions in the context of its provider without knowing who its provider is.

The benefits of these IoC containers are clear. You have one set of interfaces to manage data mangling, and that set of interfaces is essentially a proxy. The proxy acts as a mapping service that can be rigged and re-rigged to isolate components, and as such it can also transparently support object mocking.

What I argued in my previous entry was that there are serious issues I have with these IoC container systems:

  1. They require a dependency upon the IoC container library (Castle Windsor, et al). This introduces more moving parts and more areas where the application can be prone to failure. It's not only that it's a dependency upon a third party software solution that may or may not have bugs, it's that the maintainers of the system have something else to worry about:
    • Potential fail points
    • Learning curves
    • Extra configuration
    • The oxymoron that is the dependency upon the dependency injection mechanism
  2. It's a massive new learning curve for those maintaining the application. When working with business objects, the last thing a developer should have to worry about is the support infrastructure, and the whole idea of all of the objects being tailored around the IoC container API suggests that the whole thing just got a lot messier and kludgier, rather than cleaner.
  3. No two IoC container subsystems are anywhere near like each other, so switching from one system to another may require a huge refactoring project.
Contained Interfaces

Going back to classic IoC, another scenario is where the model, being lightweight, retains a lightweight reference to the provider it came from. I'm not really talking about data classes like ADO.NET's DataReader, which has to have a reference to its provider in order to support things like Read(); rather, I'm talking about POCOs (Plain Old C# Objects)--your domain objects--that implement an interface exposing a standard interface reference to a provider that can handle all of their CRUD operations. So when, for example, a Save() method gets invoked, it gets invoked directly on the model, and the model then turns around and tells its referenced provider to do the job on its behalf.

As long as the interface is established and is standardized, it doesn't matter what implementation the interface actually uses. A controller can inject the implementation of the provider directly to the model as soon as it is appropriate.
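Here's a minimal sketch of the contained-interface idea; the names (IRecordProvider, InMemoryProvider, etc.) are my own, invented for illustration, not from the sample package:

```csharp
// Hypothetical names for illustration only.
public interface IRecordProvider
{
    void Save(object model);
}

public class Customer
{
    // The "contained interface": a reference to whichever provider the
    // controller injected. The model never knows the concrete type.
    public IRecordProvider Provider { get; set; }

    public string Name { get; set; }

    public void Save()
    {
        // Delegate the work to the injected provider.
        Provider.Save(this);
    }
}

// A swappable implementation -- could be SQL, XML, in-memory, or a mock.
public class InMemoryProvider : IRecordProvider
{
    public int SaveCount;
    public void Save(object model) { SaveCount++; }
}
```

A controller would wire it up with something like `customer.Provider = new InMemoryProvider();` and from then on `customer.Save();` just works, against whatever provider was injected.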

In this way, interfaces themselves provide the core bread and butter of IoC. (Not so with some of the IoC containers previously described, as they often tend to use generics and delegate placeholders to wrap everything in actual implementation code.) There's nothing new here; this has been the premise of software since C++ and COM, for many, many years. But it comes really close to what I'm trying to argue here. We've been building up a whole new breed of developers who want to fix what ain't broke.

To be honest, I feel torn at this point between the basic contained interfaces scenario versus the events-based IoC I'm proposing, but there are pros and cons of each, and of both together, which I'll get into in a moment.

Evented Methods 

I'd like to play with terms here and suggest a pattern name: Evented Method. I haven't seen this pattern demonstrated much before, so that's why I feel like I'm suggesting something many people haven't considered. Perhaps there is a reason why this pattern isn't used much, but I'm just not sure what that reason is--which is why I'm blogging it; maybe someone can tell me. It seems WAY too versatile to overlook.

The idea of Evented Methods is basically to forward the invocation of a method to an event, so that the method can always be handled by an unknown outside provider.

In the context of a data model, for the purposes of demonstration, I would like to call a data model that implements the Evented Method pattern an Evented Record. An Evented Record, then, is a variation on the Active Record pattern, which might imply:

  • "new" constructor creates a new record or, with constructor parameters, loads an existing record into a POCO data model object 
  • properties exposed on the object are assumed to be data fields
  • the data model can be loaded or reloaded from its data source 
  • a "save" function should persist the state of the object (insert or update)
  • a "delete" function should remove the record from the database or from the data file

But instead of implementing these functions, the data model only exposes them as interfaces and keeps them that way. There is no implementation of "save" within the data model, yet the data model itself can be called upon by the app layer, where the model's Save() method is treated by the application logic as, "You there, save yourself!!" The model does this by having delegate references to the provider interfaces directly in the object. So if "Save" is called upon from the app to the model, the model will simply pass the request along and say, "Hello out there? Can someone please do this saving action for me?" The provider, which should already be subscribed to the object, "sees" the event and handles it as if it had been called directly.

This is a variation on Contained Interfaces I previously described, where the model has a reference to the interface of the provider and invokes it. The difference here is the form of the interface bindings, using event subscriptions (events pointing to delegates, which then point to providers) rather than actual interface references (properties/fields pointing to providers).

Note: This Evented Method pattern is actually an implementation subset of the delegation pattern.
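A minimal sketch of an Evented Method (again, all names here are my own, for illustration): the model's Save() contains no persistence logic at all; it only raises an event.

```csharp
using System;

// Hypothetical sketch of an Evented Method: Save() raises an event
// instead of containing any persistence logic itself.
public class EventedCustomer
{
    public string Name { get; set; }

    // Any number of providers (or loggers, auditors...) can subscribe.
    public event EventHandler HandleSave;

    public void Save()
    {
        var handler = HandleSave;
        if (handler == null)
            throw new InvalidOperationException("No provider is subscribed to HandleSave.");
        handler(this, EventArgs.Empty); // "Can someone please save me?"
    }
}

public class EventedProvider
{
    public int SaveCount;
    public void OnSave(object sender, EventArgs e) { SaveCount++; }
}
```

Wiring is a single line, `customer.HandleSave += provider.OnSave;`, after which `customer.Save();` is handled by the subscribed provider (or providers).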

There are several reasons why one might favor the event-based interfacing over the Contained Interfaces I previously described. Among them,

  • Raising an event can be handled by multiple subscribers. This can be useful for things like logging and auditing.
  • Events facilitate an assignment syntax that enables plug-in-like support without the "stepping on toes" of property assignment whereby a prior assignment might have already been made.
  • The isolated nature of exposing interfaces via events rather than implementation properties gives application developers and modelers greater ability to focus less on the provider interfaces and more on the domain code.
  • Some might argue that the model should only know about itself and know very little or nothing about its providers, not even its interfaces.
  • The nature of the behavior is out-bound, arguably inferring a "needs of" or a "happening to" the model rather than an incoming "thou shalt" command from the model.
  • Event handlers are invisible to the outside world -- only the event raiser can see the event subscribers. 

On the other hand, there are also arguments one could make in favor of simple contained interfaces.

  • A simple but complete contained interface can provide predictability of the nature of the model's providers to the model.
  • A strict one-to-one relationship (not one-to-potentially-many) between the model and the provider can be enforced.
  • The invocation relationship between the model and its provider is much more trivial.

Like I said, I am torn. I am not sold on IoC container libraries yet, though, because, as I am about to demonstrate, there isn't much need for them except perhaps in extremely complex scenarios.

The Biggest Problem With Contained Interfaces and Evented Methods I've Found So Far ...

There is one serious caveat with both contained interfaces and evented methods, and that is that the data models must be properly disposed of. In the disposing of these objects, the references to the providers must be dropped. Failure to do this can result in the failure of the model and/or the provider to be destroyed, which can result in memory leaks, in providers running out of resources, or, in some rare cases, data corruption or application faults due to delayed finalization of the objects.

If the developer simply implements IDisposable and invokes Dispose() where appropriate, things should behave correctly.
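A hedged sketch of what that disposal might look like (the names here are illustrative, not the sample project's code): the record clears its own event subscriptions so that neither it nor the provider keeps the other alive.

```csharp
using System;

public delegate void RecordHandler(object sender);

public class Record : IDisposable
{
    public event RecordHandler HandleSave;

    public bool HasSubscribers
    {
        get { return HandleSave != null; }
    }

    public void Dispose()
    {
        // From inside the declaring class, assigning null to the event's
        // backing delegate drops the entire invocation list at once,
        // releasing the references to the subscribed providers.
        HandleSave = null;
    }
}
```

Application code then just wraps the record in a using block, or calls Dispose() explicitly, when it is done with it.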

Evented Records Demonstrated

The code sample I provided at the top of this article is something I produced entirely for the sake of this discussion; it took me about a day to write.

The EventedRecord project is a provider-agnostic data persistence prototype library that illustrates the use of evented methods as a form of contained interfaces whereby inversion of control (not yet fully demonstrated) can be applied. It demonstrates:

  • The use of an interface (IEventedRecord) to define how events are used to delegate (v) the CRUD operations to the provider(s)
    • HandleXXX events (HandleSave, HandleLoad, HandleDelete)
      • Always return an array of EventedResult (one per subscriber to the event)
    • Save(), Load(), and Delete() interfaces for application code to be able to invoke
  • An example implementation abstract base that
    • Uses a dictionary object to maintain data that
      • maintains dirtiness state, and
      • raises events before and after changes to properties
    • Raises events before and after saving
    • Can reset itself to the original data that was loaded into it before changes began
    • Hides the ugly and, to application logic, irrelevant "HandleXXX" events from its IntelliSense members list
    • Invokes the event subscribers one at a time, generating a list of results. 
    • In the event of a failure (Exception), the list of results so far is compiled and passed into the resulting custom exception that is thrown on behalf of the application.
  • A basic data provider interface for handling data objects by way of events.
    • Nothing to it, just:
       
        void BindAsHandler(IEventedRecord record);
       
    • The expectation is that it would subscribe to all the IEventedRecord object's "HandleXXX" events that it can handle.
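Pieced together from the bullets above, the interface's shape is roughly the following; the exact signatures are my guesses for the sake of illustration, not necessarily the project's actual declarations:

```csharp
using System;
using System.Collections.Generic;

public class EventedResult { /* one subscriber's outcome; details elided */ }

public class NameValuePair
{
    public string Name;
    public object Value;
    public NameValuePair(string name, object value) { Name = name; Value = value; }
}

public delegate EventedResult EventedRecordHandler(IEventedRecord sender);

public interface IEventedRecord
{
    // Providers subscribe to these ...
    event EventedRecordHandler HandleSave;
    event EventedRecordHandler HandleLoad;
    event EventedRecordHandler HandleDelete;

    // ... and application code invokes these, which forward to the events,
    // collecting one EventedResult per subscriber.
    EventedResult[] Save();
    EventedResult[] Load(params NameValuePair[] key);
    EventedResult[] Delete();

    // One potential way for a handler to reach the record's data fields.
    Dictionary<string, object> GetValues();
}
```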

This diagram demonstrates the initial IEventedRecord interface definition as provided in the source code at the top of this article, and why it is called what it is called.

* I put a yellow asterisk on GetValues() to mention that it is one potential way for the handler to get to the data fields (name and value collection). It should return a Dictionary<string, object>. 

The console project code demonstrates how it is used:

static void Main(string[] args)
{
    MemoryDataProvider memProvider = new MemoryDataProvider();
    Employee emp = new Employee(memProvider);

    // .. or in the desire to make Employee clueless about the provider,
    // I could just as well have gone with ..
    emp.Dispose();

    emp = new Employee();
    memProvider.BindAsHandler(emp); // provider-controlled binding

    emp.BeforeSave += new EventHandler(delegate(object sender, EventArgs e)
    {
        Console.WriteLine("Saving employee: " + ((Employee)sender).Name);
    });
    emp.AfterSave += new EventHandler(delegate(object sender, EventArgs e)
    {
        Console.WriteLine(((Employee)sender).Name + " has been saved!");
    });

    emp.Name = "Jon";
    emp.Title = "Programmer";
    emp.Bio = "Loser.";
    emp.Phone = "999-999-9999";
    emp.Ext = 101;
    emp.Save();
    int newid = emp.Id;

    emp.Dispose();

    emp = new Employee(memProvider, new NameValuePair("Id", newid));
    Console.WriteLine(emp.Name + " has been reloaded!");

    // .. or in the desire to make Employee clueless about the provider,
    // I could just as well have gone with ..
    emp.Dispose();

    emp = new Employee();
    memProvider.BindAsHandler(emp);
    emp.Load(new NameValuePair("Id", newid));
    Console.WriteLine(emp.Name + " has been reloaded AGAIN!");

    WriteLine('-');

    Console.WriteLine("Press ENTER to exit ...");
    Console.ReadLine();
}

When this executes, the output should look like this:

Saving employee: Jon
Jon has been saved!
Jon has been reloaded!
Jon has been reloaded AGAIN!
-------------------------------
Press ENTER to exit ...

There are a number of things demonstrated in this prototype program, not the least of which are better illustrated by the code outside of Main() that Main() invokes.

But I guess I'll start at the top.

MemoryDataProvider memProvider = new MemoryDataProvider();

MemoryDataProvider is just an object that implements IEventedRecordHandler, which only has one member, BindAsHandler(). In so doing, it provides event handlers for each of the three CRUD operations (Load, Save, Delete).

public void BindAsHandler(EventedRecord.IEventedRecord record)
{
    record.HandleLoad += new EventedRecordHandler(Record_HandleLoad);
    record.HandleDelete += new EventedRecordHandler(Record_HandleDelete);
    record.HandleSave += new EventedRecordHandler(Record_HandleSave);
}

Internally, MemoryDataProvider is just doing data storage and retrieval in a dictionary of dictionaries.

A SqlDataProvider is also partially implemented, but not tested, and certainly not in accordance with SQL best practices, but it's there, so .. there it is. *shrug*

Moving on, ..

Employee emp = new Employee(memProvider);

Employee is a class that implements IEventedRecord. This particular constructor is one example of IoC-friendly code because it introduces the dependency object at the time of its instantiation (the provider is injected during the model's creation). Here's what the constructor does under the covers:

public Employee(IEventedRecordHandler handler, params NameValuePair[] key)
 : base(handler, key)
{
}

Employee inherits from my sample base class (EventedRecordDictionaryBase), which does the implementation of IEventedRecord ahead of time and also adds this binding constructor.

public EventedRecordDictionaryBase(IEventedRecordHandler handler, params NameValuePair[] key)
 : this()
{
    handler.BindAsHandler(this);
    if (key != null && key.Length > 0)
    {
        this.Load(key);
        this.Reset(ResetMode.RetainNotDirty);
    }
}

As you can see, all that passing the handler into the constructor did for us was give us an early chance to invoke its BindAsHandler() method. After that invocation is done, the provider is forgotten.

Under the covers, there is actually still a reference being maintained. The BindAsHandler() method was supposed to (and did, in this case) subscribe to the events exposed by IEventedRecord. These subscriptions are delegates stored in an invocation list that is directly associated with this EventedRecordDictionaryBase object (again, this is the base object for Employee). Literally, in the source code, you'll find the storage events named _HandleSave, _HandleLoad, and _HandleDelete, so named with underscores because the IEventedRecord event interfaces are "hidden" from the object's public interface (although they can still be accessed using an explicit cast to IEventedRecord).

private event EventedRecordHandler _HandleSave;
event EventedRecordHandler IEventedRecord.HandleSave
{
 add
 {
  _HandleSave += value;
 }
 remove
 {
  _HandleSave -= value;
 }
}
private event EventedRecordHandler _HandleDelete;
event EventedRecordHandler IEventedRecord.HandleDelete
{
 add
 {
  _HandleDelete += value;
 }
 remove
 {
  _HandleDelete -= value;
 }
}
private event EventedRecordHandler _HandleLoad;
event EventedRecordHandler IEventedRecord.HandleLoad
{
 add
 {
  _HandleLoad += value;
 }
 remove
 {
  _HandleLoad -= value;
 }
}


Notice the HandleXXX events are not exposed without explicitly casting to the IEventedRecord interface.

In EventedRecordDictionaryBase, in fact, the event handlers' delegates are treated as a delegate chain and invoked one at a time. Their results are compiled into a results list and returned as an array, every time. In addition to this codebase, I discussed how and perhaps why this might be done in my previous article, Part One.

Continuing in the program,

// .. or in the desire to make Employee clueless about the provider,
// I could just as well have gone with ..
emp.Dispose();

emp = new Employee();
memProvider.BindAsHandler(emp); // provider-controlled binding

Note that I invoked Dispose() just as a clean-up to explicitly drop the event subscriptions; I'm still looking for ways around doing that or whether it's even necessary.

In this scenario, I just use an empty constructor and then invoke memProvider's BindAsHandler() method directly. I can imagine this happening in mapping class scenarios, where two discrete routines are executing, one to create the employee object, the other to map the employee with the provider. In other words, where you see memProvider.BindAsHandler(emp), that invocation might occur elsewhere in a system rather than on the next line of the same routine.

Next, I start having fun with demonstrating the use of events as a logging mechanism:

emp.BeforeSave += new EventHandler(delegate(object sender, EventArgs e)
{
    Console.WriteLine("Saving employee: " + ((Employee)sender).Name);
});
emp.AfterSave += new EventHandler(delegate(object sender, EventArgs e)
{
    Console.WriteLine(((Employee)sender).Name + " has been saved!");
});

When the code executes, you'll see "Saving employee: Jon" and then "Jon has been saved!", but only after Save() is invoked (and "Jon has been saved!" should only show if saving was successful).

I've been using this approach for logging since I began writing C#. It just seemed handy to me to raise log events blindly and only handle them when needed by subscribing to the events.

Next I get to actually populate some fields

emp.Name = "Jon";
emp.Title = "Programmer";
emp.Bio = "Loser.";
emp.Phone = "999-999-9999";
emp.Ext = 101;

Nothing special about this, except I suppose it's worth the elementary observation that Employee, being already an inheritor of a fully implemented base class, is quite easy to look at and maintain by simply building on the dictionary-driven base class for field/property management:

..

public string Name
{
 get { return (string)base.GetValue("Name"); }
 set { base.SetValue("Name", value); }
}

public string Title
{
 get { return (string)base.GetValue("Title"); }
 set { base.SetValue("Title", value); }
}


..

Now, the magic happens:

emp.Save();

As I've explained many times by now, this will invoke the HandleSave event, which is handled by the MemoryDataProvider object. The implementation code isn't terribly simple (it's not just "if (HandleSave != null) HandleSave();"), but it does implement the Evented Method pattern I have described repeatedly.

I should note that as I was building up this demonstration with the Employee object, I ran into the ID situation. Oftentimes when working with data, you don't know the database ID of the record you're creating until it has been persisted (INSERTed into the database, etc.). So, just to make the demonstration a bit more complete, I added an [AutoIdent] attribute to the Id property, which defaults to -1, meaning uninitialized (i.e. a new record).

When Save() is invoked, the handler then scans the properties of the object being saved and checks for an AutoIdentAttribute. My MemoryDataProvider has a sample auto-identity routine that updates the model object and assigns it a new ID. This is why the following line of code returns a value different from what it held before emp.Save(); executed.

int newid = emp.Id;
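The kind of attribute scan described above can be sketched like this; AutoIdentAttribute, DemoRecord, and IdentityAssigner are illustrative re-creations for this discussion, not the project's actual code:

```csharp
using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class AutoIdentAttribute : Attribute { }

public class DemoRecord
{
    [AutoIdent]
    public int Id { get; set; }

    public DemoRecord() { Id = -1; } // -1 == uninitialized, i.e. a new record
}

public static class IdentityAssigner
{
    private static int _nextId = 1;

    // Scan the saved object's properties; any int property marked
    // [AutoIdent] that still holds the -1 sentinel gets a fresh identity.
    public static void AssignIdentity(object record)
    {
        foreach (PropertyInfo prop in record.GetType().GetProperties())
        {
            object[] attrs = prop.GetCustomAttributes(typeof(AutoIdentAttribute), true);
            if (attrs.Length > 0 && prop.PropertyType == typeof(int)
                && (int)prop.GetValue(record, null) == -1)
            {
                // simulate the identity the INSERT would have generated
                prop.SetValue(record, _nextId++, null);
            }
        }
    }
}
```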

Note that although I'm now introducing attribute checking to the persistence provider, my attribute-checking implementation is very incomplete. For example,

  • I made the assumption that all fields returned from IEventedRecord.GetValues() have field names that match the properties of the object and that these properties are reflectable.
  • I'm not "documenting" via attributes an alternate name for the data field.
  • I'm not marking some properties as persistable vs. not persistable.
  • I'm not caching the reflected type properties.

All of these should be done, both for developer usability's sake and for performance's sake. Even so, I started something: a coding pattern that I can build from, and I can add the above things from here if I want to.

emp.Dispose();

emp = new Employee(memProvider, new NameValuePair("Id", newid));
Console.WriteLine(emp.Name + " has been reloaded!");

This code demonstrates how I've essentially destroyed my Employee object, then created a new Employee object from the data provider. This time, however, while creating the Employee object, I also loaded it by passing an identifier. Once again, I'm using the constructor to pass in my provider context, but if I wanted to do that in total isolation I could, which the next code demonstrates ..

// .. or in the desire to make Employee clueless about the provider,
// I could just as well have gone with ..
emp.Dispose();

emp = new Employee();
memProvider.BindAsHandler(emp);
emp.Load(new NameValuePair("Id", newid));
Console.WriteLine(emp.Name + " has been reloaded AGAIN!");

This does the exact same thing. The previous code using the constructor only wrapped this bind-and-load routine.

Conclusion (For This Part)

So that's the demonstration. I want to emphasize again how dependency injection occurred by way of an evented method pattern and how providers can implement services for objects without the objects being concerned about their implementation.

I have not yet demonstrated how testability comes into play here, but I did demonstrate IoC by way of events in a practical, real-world example. I hope people find this to be interesting and thought-provoking.


IoC By Way Of Simple Event Raising: Part 1

by Jon Davis 14. August 2008 20:20

There's a crappy and tiny little sample project available for this post at: IoCEvents_Part1.zip from which one can experiment with commenting things out and exciting stuff like that.

So I've noticed that the latest sexy trend for the last couple years in software development has been to adopt these frameworks that facilitate IoC, or Inversion of Control. Among the "sexy" toys that people have adopted in .NET space are Castle Windsor, Spring.net, StructureMap, et al. 

I have little doubt that I'm going to get knocked a few brownie points for this post. But I have a really tragic confession to make: I don't get it. I see the purpose of these tools, but sometimes I wonder if architects have picked up the virus that causes them to add complexity for the sake of complexity. These tools are supposed to help us make our lives easier, but the few exposures I've had of them have had me scratching my head because some of them introduce so much complexity that they seem a little silly. Maybe my brain is just a little too average in size. I like to think in simple terms.

And I get overwhelmed by the complexity when I try to read up on Castle Windsor, Rhino Mocks (which is not IoC but complements it), and other ALT.NET frameworks that have made inroads to making .NET solutions "agile" and testable. I look at things like [this] and while perhaps a lot of architects and ALT.NET folks are thinking, "cool", I'm scratching my head and thinking, "eww!!" I'm not very savvy with Rhino Mocks nor Castle Windsor but looking at blog entries like that I'm frankly scared to get to know them.

It's not that I'm stupid or that I lack knowledge or discipline. It's that I have the firm belief that things that require complex attention and thought should be isolated from the practical business logic and "everyday code", including their test code and the containing class library projects themselves. Were I to take on an architect's role, I would do everything I could to be as mainstream and lightweight as possible, so that any junior programmer could see and understand what I'm doing with a little effort, and any senior developer could read and understand what's going on without even thinking about it. Granted, I haven't been all that great at it, but it's something to strive for, and not something I see when I watch people build up dependencies upon mocking and IoC frameworks.

Introduction Of New Dependencies Is Not Clean IoC. (Or, "Just Say 'No!' To IWindsor!")

Here's where I'm most annoyed by what little I've seen of IoC container toolkits: they introduce a dependency upon the IoC container toolkit itself. My first exposure to an IoC container in the workplace (as opposed to looking at someone else's solution, such as Rob Conery's videos) was an awfully ugly one. Legacy code that I already despised, but that had been more or less stabilized over years of maintenance and hand-holding, now had partial implementations of integration with Castle Windsor and was throwing bizarre exceptions I'd never seen before. The exceptions being thrown by Castle were, to my untrained eye, utterly incomprehensible. This was while we were scrambling to update the codebase so that my latest changes could get rolled out. I was told to add huge blocks of configuration settings to the web.config file. After adding those config file changes, all kinds of new errors showed up. "Oops. I guess I'm not done yet," the architect muttered. Knowing it could be days before he would be "done", and I needed to roll out my changes by the end of the day, I decided to roll back to the earlier legacy codebase that had no dependency upon Castle.

Since then, they've cleaned it up and rolled out the revisions with Castle dependency, which is great (and I've since quit) but when it comes to inversion of control I still don't understand why everything must become more brittle and more heavily dependent upon third party frameworks, particularly when the project needing IoC exists as a standalone set of business objects. Ask yourself, "What direct correlation do my business objects have with this massive IoC subsystem? Do I want to introduce kludge to this, or should my business objects be as pure as I can make them, in the interest of the original premise of IoC, which is to make the objects or functionality as completely agnostic to, but cooperative with, external projects and libraries as possible?" Or at least, that's what I think the objectives should be with IoC.

Meanwhile, I saw a video on Rob Conery's blog where he was exploring the utilization of an IoC framework, and I still have myself asking the question, why? You already have a "know it all" mapping class, why not just map things together with basic event handlers?

C# Already Has An IoC Framework. It's Called The "Event".

Dependency injection and IoC are not rocket science, or at least they shouldn't be. Theoretically, if something needs something from someone else, it should ask its runtime environment, "Um, hey can someone please handle this? I don't know how." That's why events exist in .NET.

Events in .NET are often misunderstood. They're usually only used for UI actions -- someone clicks on a button and an event gets raised. Certainly, that's how the need for events started; Windows is an events-driven operating system, after all. Nearly all GUI applications in Windows rely on event pumps.

In my opinion, Java's Spring Framework, which is perhaps the most popular dependency injection framework on the planet, became popular because Java didn't have the same eventing subsystem that .NET enjoys. It's there, but, originally at least (I haven't done much with Java since 1999), it wasn't very versatile. And likewise, in my opinion, the .NET world has begun to adopt Spring.NET and Castle Windsor because non-.NET languages have always needed some kind of framework in order to manage dependency injection, and so the Morts are getting "trained in" on "the way things are done, the comp-sci way". On this, I think the ALT.NET community might've gotten it wrong. 

If you think of .NET events as being classic IoC containers instead of GUI handlers, suddenly a whole new world begins to open up.

In the GUI realm, the scenario has changed little over the last decade. When a mouse moves and a button clicks, something happens in IoC space that should sound familiar:

  1. The mouse driver doesn't know if something is going to handle the circumstances, but it is going to raise the event anyway.
  2. Windows passes the event along to the GUI application. It doesn't know if the application is going to handle the circumstances, but it is going to raise the event anyway.
  3. The application passes the event along to the control that owns the physical space at the screen coordinates of the mouse. It doesn't know if the control is going to handle the circumstances, but if the control has subscribed to the event, it is going to raise the event anyway.
  4. The control is a composite control that has a button control at the mouse's coordinates. It gets "clicked", and another event gets raised, which is the click event of a button control. The raising of the event is done internally in .NET, and .NET doesn't know how the event is going to be handled, but if the application has subscribed to the button control's event handler, it raises the event anyway.
  5. The application processes the button control's click event and invokes some kind of business functionality, such as "PayCustomer()".

In this way, Windows and the application became an IoC mapping subsystem that mapped mouse coordinates and actions to particular business functionality.

If you apply the same principle to non-GUI actions, use events in C# interfaces for everyday object member stubs, and follow an events-driven IoC pattern, you can easily find the mapping of disparate functionality to be pluggable and fully usable as first-class functionality baked right into C#.

Events And Event Handlers Are Truly Dependency-Neutral

Event handlers are just the dynamic invocation of delegates. There are no other significant qualifications for events or event handlers, other than the obvious:

  • The events themselves must have accessors (public, protected, etc.) that enable the handler to "see" the event in the first place.
  • The raised event's argument types and the return value type might require external library reference inclusions in the event handling project.
    • On the flip side the event handler can offer dependencies that the event raiser knows nothing about, which is where dependency injection comes into play.

Most people reading this already know all about how events and delegates in C# work, but I think it's very possible that C# programmers have had difficulty putting two and two together and discovering the versatility of event delegation. I suppose it's possible that some things got assumed that shouldn't have been.

Incorrect Assumption #1: Events Should Only Be Used For GUI Conditions

The fact is you can use events pretty much anywhere you want. Got some arcane business object that "does something" now and then? You can use an event there. Seriously. But you knew that.

Incorrect Assumption #2: Events Should Strictly Adhere To .NET's Pattern Of Event Handler Signatures

The common event handler signature that Microsoft uses in .NET 2.0 is seen in the EventHandler delegate:

delegate void EventHandler (object sender, EventArgs e);

This is actually not a desirable signature for business objects. It's fine in GUI applications, and, while a little awkward, it's heavily used in ASP.NET Web Forms. But it's definitely not very useful in specifically-crafted class libraries where handler behaviors are assumed and sometimes required.

You do not need to pass object sender, although you probably should send the sender object anyway. Typed.

The reason why object sender exists in the standard pattern is because the event handler, being a delegate, can be reused to handle events having identical signatures on behalf of multiple different objects. But if there is, by design, a one-to-one relationship between the handled object and the object's handler, the sender can be safely inferred.

That said, though, if you do pass the sender along, there's no significant need to send it as type object. Send it as the lowest common denominator of what it can be. I've wasted so much time typecasting the sender in my event handlers that I'm actually annoyed "object sender" is a pattern; in Win Forms, the lowest common denominator should be System.Windows.Forms.Control, not object; and in the Web Forms world, the lowest common denominator should be System.Web.UI.Control, not object.
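For instance, nothing stops you from declaring the delegate with a typed sender so the handler never has to cast (Widget and its members are made-up names for this sketch):

```csharp
using System;

public class Widget
{
    public string Name;

    // Typed sender: subscribers receive a Widget, not an object.
    public delegate void PokedHandler(Widget sender);
    public event PokedHandler Poked;

    public void Poke()
    {
        if (Poked != null) Poked(this);
    }
}
```

A subscriber can then write w.Poked += delegate(Widget sender) { Console.WriteLine(sender.Name); }; with no cast from object anywhere in sight.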

You do not need to pass an EventArgs object.

EventArgs is a generic argument container object. It is, in itself, an empty object, but inheriting it and adding your own properties enables you to reuse the Microsoft .NET EventHandler delegate for strongly typed events without breaking syntax barriers. Here again, the whole premise of the original .NET design is that type casting and boxing is the easy answer for managing arguments everywhere.

But it is not a silver bullet and it is certainly not the optimal way of managing interfaces.
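For reference, the conventional pattern the last two paragraphs describe looks like this (Order and OrderShippedEventArgs are made-up example names; EventHandler&lt;T&gt; is the .NET 2.0 generic form of the delegate):

```csharp
using System;

public class OrderShippedEventArgs : EventArgs
{
    public readonly string TrackingNumber;
    public OrderShippedEventArgs(string trackingNumber)
    {
        TrackingNumber = trackingNumber;
    }
}

public class Order
{
    public event EventHandler<OrderShippedEventArgs> Shipped;

    public void Ship(string trackingNumber)
    {
        // Note the cost: a fresh args object is allocated on every raise.
        if (Shipped != null)
            Shipped(this, new OrderShippedEventArgs(trackingNumber));
    }
}
```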

The fact is, you can pass anything you want into your event handlers. If you want your event handler to look like this:

delegate int SomethingHappened(MyClass sender, string someString, int someInt, List<Dah> dahs);

.. instead of this ..

delegate void SomethingHappened(object sender, SomethingHappenedEventArgs e);

.. you can.

Why would you want to? Well, for one thing, using EventArgs requires you to instantiate an EventArgs object every single time you raise this event. You could use a singleton or something but that would be atrocious design and not thread-safe.

You do not need to return void.

Some people might not actually even know this, but returning void is optional. You can return a string, or anything else. For example, if an event named "FindAString" expects a string as a return value, the event handler can return that string to the event raiser.

public delegate string FindAStringHandler(object someContext);

public event FindAStringHandler FindAString;

Now your class, having the event FindAString, can simply invoke the event as if it were a method.

string foundString = FindAString(this);

Obviously this will fail if FindAString was not subscribed to, but if it was, it would "just work". (If it wasn't, it would throw a NullReferenceException, which incidentally is a very poor exception choice on Microsoft's part!)

Be Wary Of Multiple Subscribers

One concern to have about returning anything other than void is that an event might have multiple subscribers. When there are multiple subscribing event handlers, the last event handler's return value is captured in the normal invocation.
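This is easy to demonstrate: subscribe two handlers to a string-returning event, and a plain invocation hands back only the last handler's value (the names here are illustrative):

```csharp
using System;

public class Asker
{
    public delegate string AnswerHandler();
    public event AnswerHandler GetAnswer;

    public string Ask()
    {
        // Multicast invocation runs every subscriber in subscription order,
        // but only the return value of the last one survives.
        return GetAnswer();
    }
}
```

So with a.GetAnswer += delegate { return "first"; }; followed by a.GetAnswer += delegate { return "second"; };, a.Ask() yields "second", and "first" was computed and silently discarded.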

For this reason, I wish Microsoft would add a keyword to C#'s event syntax: single. (Some other verbiage with the same function would be fine.) The idea is that if two subscriptions were attempted on the same object's event within the same project, a compile-time error would occur, and if two subscriptions were made on the same object's event from disparate projects, a runtime error would occur.

Event subscriptions are stacked like a list. They're not significantly unlike List<Delegate>, although technically that's not how they're implemented.

You can evaluate the number of subscribers, from the scope of the object that contains the event, by checking the Length of the array returned by the event's GetInvocationList() method.

public event FindAStringHandler FindAString;

public string Invoke()
{
    if (FindAString == null)
        throw new InvalidOperationException("FindAString event not handled.");
    if (FindAString.GetInvocationList().Length > 1)
        throw new InvalidOperationException("Too many subscribers to FindAString event.");
    return FindAString(this);
}

This in effect manually enforces the single keyword functionality I proposed above.

That said, if you want the multiple subscribers to execute, such as to obtain the return values from each of the multiple subscribers, you can simply invoke each delegate in GetInvocationList(), one at a time.

//return FindAString(this, _params);  // returns one string
List<string> s = new List<string>();
foreach (Delegate d in FindAString.GetInvocationList())
{
    s.Add(d.DynamicInvoke(this, _params) as string);
}
return s; // returns a list of strings rather than just one string
 

This introduces kludginess, I suppose. That's about three or four more lines of code than I want to write. (Yes, I just said that! Keep in mind, using events everywhere like I'm implying here would call for an absolute minimum of code for frequently needed patterns.) But then that's what utility classes and other patterns are for. You can create a generic (<T>) handler to wrap this functionality.
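One possible shape for such a utility, sketched under my own naming (this is not part of the sample library): a generic static helper that walks GetInvocationList() and collects every typed return value.

```csharp
using System;
using System.Collections.Generic;

public static class EventUtil
{
    // Invoke every subscriber of a multicast delegate one at a time,
    // collecting each return value instead of keeping only the last.
    public static List<T> InvokeAll<T>(Delegate multicast, params object[] args)
    {
        List<T> results = new List<T>();
        if (multicast == null) return results; // no subscribers: empty list
        foreach (Delegate d in multicast.GetInvocationList())
            results.Add((T)d.DynamicInvoke(args));
        return results;
    }
}
```

With a helper like this in place, collecting every subscriber's answer collapses to a one-liner along the lines of List&lt;string&gt; all = EventUtil.InvokeAll&lt;string&gt;(FindAString, this);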

Interfaces Can Be Eventful, Too!!

One should not forget that interfaces can also define events. You can enforce the proper exposure of an event in an implementing object and have it use the correct signature (the correct delegate).

The only caveat I can think of with regard to interfaces and events is that there is no way to use interfaces to force an exposed event to be subscribed to at compile-time; you can only enforce event handling by late-checking the delegate at runtime, i.e. checking to see if it's null.
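A quick sketch of both points, reusing the FindAString example from earlier (the interface and class names are mine): the interface forces implementers to expose the event with the right delegate type, and the implementation can only late-check for subscribers at runtime.

```csharp
using System;

public delegate string FindAStringHandler(object someContext);

public interface IStringFinder
{
    // Implementers must expose this event with exactly this delegate type ...
    event FindAStringHandler FindAString;
}

public class StringFinder : IStringFinder
{
    public event FindAStringHandler FindAString;

    public string Find()
    {
        // ... but nothing forces a subscription; the best we can do is
        // check the delegate for null at runtime.
        if (FindAString == null)
            throw new InvalidOperationException("FindAString event not handled.");
        return FindAString(this);
    }
}
```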

Putting It Together

So if IoC is handled by hand using events and event handlers, where does it go? The simple answer is on the controller. Consider the classic MVC scenario. Where else would it go? I can't imagine. MVC and event delegation are already pretty much built for each other.

In my mind, a mapping class, the same as one that would be used with any third-party IoC framework, can be used to join the event-raising objects with their event handlers. It might optionally reference a config file to produce the mappings, or not; whatever.

 

This diagram might be trash but I suppose it's something for the sake of discussion. Maybe consider the ability to swap out the controller with a mock controller as well. *shrug*

I decided to slap ": Part 1" onto the end of this blog entry's title because I still want to prove out how simple events and event handling can pull off most of the standard testability scenarios that people are using these other frameworks for. But for the most part I *think* this is a reasonable notion: let's go back to the drawing board and simplify our codebases. What is it we're trying to do? What does C# not already have that can't be enforced with some simple patterns available since C# was conceived?



Software Development | C#

How To Get SQL Server 2005 Express In The Midst Of The SQL Server 2008 Express Push

by Jon Davis 13. August 2008 20:10

So, Microsoft went gold with SQL Server 2008, including the core engine flavor (but no basic management studio) of the Express edition. Hooray!

Problem is, Visual Studio 2008's database development project support is built entirely around SQL Server 2005 Express, not SQL Server 2008 Express. I for one made the mistake of dropping SQL Server 2005 Express to install SQL Server 2008 Express, hoping there was sufficient compatibility to keep my database projects supported. Not so.

So I had to go back to v2005, but now the problem was that there was NO LINK on Microsoft's web site to get back to SQL Server 2005 Express.

I had to rely on Google to acquire this:

http://www.microsoft.com/downloads/details.aspx?familyid=5B5528B9-13E1-4DB9-A3FC-82116D598C3D&displaylang=en


Why I'm Unimpressed With Rawness Of Skillz

by Jon Davis 7. August 2008 06:40

Since forever, geeks who take themselves seriously have loved to brag such things as, "I use Notepad to edit web pages". Carrying this over to actual programming, "I never click into the designer when editing my ASPX", or "I never design a database using designer tools, I always design it all using raw T-SQL," or "I always update my SVN from the command line". (Someone in a local tech user group bears the post signature, "Real men use Notepad.")

Puhleeze. I'm not impressed, and frankly I think anyone who brags like this should get a swift kick in the pants.

IMO, there are three levels of elevation to guruism:

  1. Awareness: Discovering the tech and the tools (like the WYSIWYG web editor ... "I'm a WEB MASTER, and you can be, too!").
  2. Intelligence: Swearing by Notepad and proudly refusing to use the WYSIWYG editor.
  3. Wisdom: Knowing when to use the right tool at the right time in order to either save time or to produce the best output. Yes, that means being fully capable of staying away from the WYSIWYG editor or the designers, but it also means being completely, 100% unafraid of such tools if they serve the purpose of helping you write better code, more productively.

I get really turned off when co-workers smirk and look down their noses at me when I mention that I'm a tools collector, as if their refusal to use anything but the textual view of SQL Query Analyzer, the C# plain-text editor, and the command prompt somehow made them superior. The fact of the matter is, these are the people who produce output that shares predictable characteristics:

  • Web pages are thrown together without thought to design.
  • Web page markup is excessive due to hit-and-miss browser testing rather than design-mode utilization.
  • Code is disorganized and messy.
  • Class libraries and databases are designed ad hoc and without thought towards the bigger, conceptual picture.
  • Databases lack indexes and referential integrity.
  • Buggy implementations take ages to be debugged due to refusal to fire up a debugger.

Yes, let's look at that last item. I don't know about you, but I am, and have always been, an F5'er. (F5 invokes the Debug mode in Visual Studio.)

Learn how to debug. With a debugger.

At a previous job, I discovered for the first time in my career what it was like to be surrounded by hard-core engineering staff who refused to hit F5. Now, granted, the primary solution that was fired up in Visual Studio took literally over a minute to compile--that means F5 would require a one-minute wait for the simplest of changes if it wasn't already running in Debug mode. But even so, it's such a straightforward and clean way to get to the root of a problem that I don't see how, or why, anyone would want to do without a solid debugger.

Re-invoking code and then reading the resulting error messages is not an acceptable debugging methodology.

Instead, set breakpoints and use the introspection tools. Here's how I debug:

  1. Set a breakpoint at the top of the stack (where the code begins to execute). If using browser-side JavaScript, add the line "debugger;" to the code.
  2. Hit F5.
  3. If the user (that's me at this point) needs to do something to get it to reach the breakpoint, do it.
  4. Once the breakpoint is reached use F10 (Step Over) or F11 (Step Into) to follow the execution path.
    • Always watch the value of each and every variable before proceeding to the next line of code. I can monitor variables by monitoring the Locals window, or, if some method needs to execute to fetch a value or if the variable is in broad scope, by putting it in the Watch window.
    • Always watch the values of each and every source property before it gets assigned to something, by hovering over it with the mouse and letting the tooltip appear that exposes its value. For example, in "x = myObject.Property;", only myObject will appear in the Locals window, and I won't see the value being assigned until it is already assigned, unless I hover over ".Property" or add it to my Watch window.
  5. If a nuisance try...catch routinely occurs such that it becomes difficult or tiresome to find where in the stack trace the exception was thrown, I might try commenting out the "try" and the "catch", or use the Exceptions dialog to make the debugger stop on all thrown exceptions. (That dialog isn't in the menus by default; you have to right-click a toolbar, choose Customize, and drag the menu item up onto the menubar.)
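Incidentally, the C# equivalent of JavaScript's "debugger;" statement is a programmatic breakpoint. A minimal sketch (the helper class name is made up; `Debugger.Break()` and `Debugger.IsAttached` are real members of `System.Diagnostics`):

```csharp
using System.Diagnostics;

public static class DebugHelper
{
    // Acts like JavaScript's "debugger;" statement: halts in the
    // attached debugger at this exact line. Guarded on IsAttached
    // so it is a harmless no-op outside of a debugging session.
    public static void BreakHere()
    {
        if (Debugger.IsAttached)
            Debugger.Break();
    }
}
```

Dropping a call to a helper like this into a hard-to-reach code path can be quicker than hunting for the right line to click a breakpoint onto.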

90% of the time, I can catch a bug by careful introspection in this manner within a couple minutes.

What the "raw skillz" folks would rather do is go backwards. Oh, it's puking on the data? Let's go to the database! Fire up SQL Query Analyzer! SELECT this! F5! SELECT that! F5! (F5 in SQL Query Analyzer, or Query view for SQL Management Studio, doesn't debug. It executes, raw. SQL doesn't have much debugging support.) Hmm, why's that data bad? Let's clean it up! UPDATE MyTable SET SomeField = CorrectValue WHERE SomeField = WrongValue ... Now, why'd this happen? Why's it still not working? I dunno!!

Oh just kill me now. That's not fixing bugs, that's fixing symptoms. If roaches ate all the pizza, this would be like replacing the pizza where it sat. Feast!!

Worse yet is when the whole system is down and the fellas are sitting there doing a code review in the effort to debug. Good lord. Shouldn't that code review come before the system went live? And, once again, F5 can and should save the day in no time at all.

Use SQL Profiler and the management code libraries.

In the SQL Server world, the closest equivalent to Visual Studio's F5 is the SQL Profiler. If you're seeing the database get corrupted and you're trying to troubleshoot and figure out why, use the Profiler. There are also the management libraries, which might provide some insight into the goings-on in database transactions, from a programmatic perspective.

Ironically, shortly after I joined the team at my previous job, I introduced SMO to the database guru. Nearly two years later, after I had put in my resignation, the same fellow introduced me to SMO, apparently forgetting that I introduced it to him to begin with. But in neither case did either of us actually do much, if anything, with SMO.

SQL transactions are a tool. Use them.

There's nothing like watching a database get corrupted because of some bug, but it's despicable when it stays that way because the failure didn't get rolled back. Always build up a transaction and then commit only after doing a verification.
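In T-SQL terms, the pattern is simply this (the table, column, and values are made up for illustration): wrap the change in an explicit transaction, verify the result, and commit only if the verification passes.

```sql
BEGIN TRANSACTION;

UPDATE Orders
SET Status = 'Shipped'
WHERE OrderID = 1042;

-- Verify before committing: exactly one row should have changed.
IF @@ROWCOUNT = 1
    COMMIT TRANSACTION;
ELSE
BEGIN
    ROLLBACK TRANSACTION;
    RAISERROR('Verification failed; transaction rolled back.', 16, 1);
END
```

The verification step can be as simple as checking `@@ROWCOUNT` or as thorough as re-querying the affected rows; either way, a failed check leaves the database exactly as it was.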

Don't hand-code database interop with user views. 

Let's look at ORM tools. Put simply,

  • If it saves coding and management time, it's an essential utility.
  • If it performs like molasses, it's crap.
  • If it is always dispensable, it's acceptable.
  • If it gets "rusty", needs routine maintenance, or was built on a home-grown effort, it's junk.

Code generators are iffy. They're great and wonderful, if only there are enough licenses to go around and they're always working. I was recently in a team that used CodeSmith, but the home-grown templates broke with the upgrade to a recent version of CodeSmith, so everything died out. Furthermore, all of the utilization of CodeSmith revolved around a home-grown set of templates that targeted a single project, and no other templates were used. And last but not least, there were only two or three licenses, and about four or five of us. So between these three failure points, it was shocking to me when my boss got upset with me for daring to want to deviate away from CodeSmith and consider an alternate tool for ORM such as MyGeneration or SubSonic when I began working on a whole new project.

Later, I met the same frustration when LINQ arrived. Hello? It's only as non-performant as one's unwillingness to learn how it ticks. And it's only as unavailable as our willingness to install .NET 3.5--and, by the way, .NET 3.5 is NOT a new runtime; like 3.0, it is just some add-on DLLs on top of the v2.0 runtime.
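And "how it ticks" isn't much: a LINQ query is just method calls and delegates under the hood, with deferred execution until something enumerates it. A tiny LINQ-to-objects sketch (the class and method names are mine, for illustration):

```csharp
using System.Linq;

public static class LinqDemo
{
    public static int[] EvensSquared(int[] source)
    {
        // Deferred execution: building the query does no work;
        // nothing runs until ToArray() enumerates it.
        var query = from n in source
                    where n % 2 == 0
                    select n * n;
        return query.ToArray();
    }
}
```

Understanding that the `where` and `select` clauses compile down to `Where()` and `Select()` calls taking delegates is most of what's needed to reason about LINQ's performance.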

Writing code should be tools-driven too.

Do basic designs before writing code. Make use of IntelliSense (for SQL, take a look at SQL Prompt). Use third party tools like Resharper, CodeRush, and Refactor! Pro. Mind you, I'm a hypocrite in this area; I tried Resharper and ran into performance and stability issues so I uninstalled it. I have yet to give the latest version a try, and same is true of the other two. But some of the most successful innovators in the industry hardly know how to function without Resharper. It doesn't speak well for them, but it does speak well for Resharper. There are lots of other similar tools out there as well.

UPDATE (8/26/2008): I've finally made the personal investment in Resharper. We'll see how well it pays off.

Don't be afraid of the ASPX designer mode.

I like to use it to validate my markup. Sometimes I accidentally miss a closing '>' or something, and the designer mode will reveal that to me much faster than if I attempted to execute the project locally. Sometimes it also helps to just be able to drag an ASP.NET control onto the page and edit its attributes using the Properties window; this is purely a matter of productivity, not of competence, and fortunately the code editor supports IntelliSense well enough that I could accomplish the same job without the Designer mode, it would just be a little bit more work and, being manual, a bit more prone to human error.

Automate your deployments.

Speaking of human error, I have never been more impressed by the sheer recklessness of team workflow than the routine manual deployment of a codebase across a server farm. At a previous job, code pushes to production would go out sometimes once a week and sometimes every day, and each time it took about half an hour of extreme concentration by the person deploying. This person would be extremely irritable and couldn't handle conversations or questions or chatter until deployment completed. Regularly, I asked, "Why hasn't this been automated yet? You can bump those thirty minutes of focus down to about one minute of auto-piloting." The response was always the same: "It's not that hard."

To this day I have no idea what on earth they were thinking, except that perhaps they were somehow proud of going raw--raw as in naked and vulnerable, such being the nature of manual labor. Going raw is stupid and dangerous. One wrong move can hurt or even destroy things (like time, sanity, and/or reputation). There's nothing to be proud of there. Thrill seekers in production environments don't belong in the workplace. Neither does insistence upon wasting time.

Design like you care.

Designers aren't just good for web layouts. I've particularly noticed how supposed SQL gurus who don't design database tables using the designer and prefer to just write the CREATE TABLE code by hand tend to leave out really important and essential design characteristics, like referential integrity (setting up foreign key constraints), or creating alternate indexes. Just because you can create a table in raw T-SQL doesn't mean you should.

The designers are essential in helping you think about the bigger picture and how everything ties together -- how things are designed. Quick and dirty CREATE TABLE code only serves one purpose, and that is to put data placeholders into place so that you can map your biz objects to the database. It doesn't do anything for RDBMS database design.

I used to use the Database Diagrams a lot, although I don't anymore simply because I hate the dialog box that asks me if I want to add the diagram table to the schema. Even so, I'm not against using it, as it exposes an important visual representation of the referential integrity of the existing objects.

Failing that, though, lately I've been getting by with opening each table's Design view and choosing "Relationships" or "Indexes/Keys". I then use LINQ-to-SQL's database diagram designer, where inferred relationships are clearly laid out for me, assuming I'm using LINQ as an ORM in a C# project. If I see a missing relationship, I'll go back to the database definition, fix it, and then drop and re-add the objects in the LINQ-to-SQL designer diagram after refreshing the Server Explorer tree in Visual Studio.

vi is better than Notepad.

If you must edit a text file in a plain text editor, vim is better than Notepad. No clicky of the mouse or futzing with arrow keys. The learning curve is awkward, but NOTHING like Emacs, so count your blessings.

I'm kidding, but the point is that there's nothing "manly" about Notepad. Of course, for the GUI-driven Windows world--better than vi or vim or anything like that--these two free Notepad replacements are pretty nice; I use both of them.

In any case, there's nothing wrong with using Notepad or some plain toolset to do a job, so long as you're reaching for the simpler toolset out of practical necessity. You might not want to wait two minutes for Visual Studio to load on crummy hardware. You don't want to wait for something to compile. Whatever the limitation, it's okay.

But please, don't look down on those of us who opt for wisdom in choosing time-saver tools when appropriate; you're really not helping anybody except your own ridiculously meaningless and vain ego.


Software Development | Opinion

Being Half-Asleep Is Like Being Drunk

by Jon Davis 3. August 2008 22:55

You wouldn't drink and drive. Would you drive at 3 AM having had no sleep? Of course you would. And if you stayed up all night, working or playing, and managed to make it to the office the next day, would you drive yourself home at the end of the day?

Sure you would. I do it all the time, and I'm just like you. ;) :D

Seriously, though, I've done that a few times on crunch work weeks, and I recently made the mistake of engaging in this practice while maneuvering a job hunt. That's right, I showed up for a job interview last week--among people who know my name from .NET user groups and from former employees of my last job's workplace--after spending most of the night trying to cram code in Visual Studio so that my last week on the job (which I resigned from voluntarily, mind you) would be left in a professional, job-complete manner. Only I compromised the potential next job. Not smart.

I showed up and complained about the previous job. They didn't even ask for it. I was just trying to keep my eyes open while whining about how sad I feel that it came to this (my resignation). I'm lucky I didn't reach over and try to get a hug from the interviewing manager. The guy was like, "I've had enough, I'm done, any questions?" I tortured them after that with stupid accusations of having rumored reputations of boxing senior programmers into trivial, mundane tasks and not exposing them to the bigger picture, among other things.

So here's my advice to myself: You're an idiot. Seriously, dude, stay balanced, and don't drink and drive.



 

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.

Contact Me 

