IoC By Way Of Simple Event Raising: Part 2 - Contained Interfaces and Evented Methods

by Jon Davis 18. August 2008 18:21

This post is Part Two on the topic of "IoC By Way Of Simple Event Raising".

Previous parts:

  • IoC By Way Of Simple Event Raising: Part 1

 This post has an associated source code package: IoCEvents_Part2.zip

In Part One of this discussion, I argued that people have become so dependent upon IoC frameworks that they have forgotten the clean, simple, and very versatile IoC framework already bundled into C# in the form of interfaces and the eventing subsystem. I ended that article indicating that I felt it was necessary to prove out the testability of a system that uses plain and simple eventing as an IoC methodology.

That proof of testability will have to wait until Part Three (or later). In this part I want to prove out a somewhat useful prototype of real-world dependency injection using eventing. The scenario I chose is the classic data model / data provider scenario:

  • A data model needs to persist itself,
  • A data provider performs the persisting, independent from the model itself

And,

  • A data persistence provider makes itself available to persist the data model by subscribing to the model's events.
  • The provider is one among multiple providers that each can handle the data model by subscribing to the model's event interface definitions.

In .NET, an abstract, hot-swappable data provider scenario is already supported with ADO.NET / System.Data. I'm not trying to reinvent the wheel here, rather I'm just trying to demonstrate a prototype for the purposes of discussion.

Disclaimer

I'm prototyping a lot of ideas that are original to me, but not original to computer science. So terms, verbiage, diagrams, et al, might be a little different from what is taught in college or trained in enterprise shops. I'm also still introducing myself to some of this stuff, being, for example, untrained in Castle Windsor and other IoC containers (due to my distaste), so I might be off on some things. In such cases, I would greatly appreciate constructive correction, and if my commenting system below this article doesn't work, you can e-mail me at: jon -AT- jondavis.net.

Provider Dependent Interfaces, IoC Subsystems, Contained Interfaces, and Evented Methods

Provider Dependent Interfaces

Classically, a provider would provide all of the CRUD operation interfaces on behalf of a data model, and there would be a one-to-one relationship between the application and the provider and between the provider and the model. Switching out one provider for another would result in a potentially different type of model altogether--which is obviously a matter of concern. And to perform actions against the model, the application would have to go straight to the provider to manage that detail.

Therein lies one of the problems. The application code has to know too much to manage the data, and usually, the model would be completely clueless about how to persist itself back to its origin and would be completely dependent upon the application logic to keep track of the provider. The provider and/or the application would own everything related to how to manage data persistence.

Another direction one might take is to provide provider hook interfaces within the model that allow the model to "speak to" the provider, whereby:

  1. a global provider exists,
  2. a rich suite of data interop code already fleshes out most of these scenarios, or
  3. the model retains a common API reference back to the provider (2-way interfaces), which I'll discuss later.

In the first scenario, the model might expose a "Save()" method, which would invoke a global utility library to persist itself back to the database. This scenario is obviously very limiting, in the same way that global-anything in general is frowned upon. What if, for example, you had two disparate data sources? What if they persist in completely incompatible ways?

In the second scenario, Visual Studio (for instance) has a rich set of tools and controls that let you hot-swap one set of ADO.NET resources for another. These tools support not only SQL SELECTs but all of the CRUD operations associated with basic data access and manipulation. The problem then becomes that the application logic gets built on top of the persistence framework that is ADO.NET and its support controls, and the developer has to constantly think in ADO.NET terms. That tends to get quite messy once real-world business and domain logic starts to surround the data models, compared with using small, discrete, and simple business objects that expose a persistence pattern while remaining essentially plain C# objects, able to inherit from anything rather than being DataRows or DataReaders. Further, Visual Studio-style data binding in general encourages a one-to-one relationship between views and data models, which is often not appropriate. Fortunately, Visual Studio does go a long way to support data binding to objects, but from what I have seen it is rarely used, or rarely used correctly. While we're sidetracked on data APIs: Microsoft is going a long way to make this second scenario fully work in most situations, as seen in LINQ and in the new Entity Framework initiative, and there are other very extensive libraries, such as NHibernate or EntitySpaces, that manage this stuff well. I am not about to discuss their merits; they are very useful. My objective is not to look at data APIs but rather at model dependencies in general C# terms.

IoC Subsystems

What IoC container libraries typically do in this scenario is provide an abstraction interface that invokes the provider functionality on behalf of the model, in a mediator role. It acts as a "man in the middle". In that way, the application code can be clueless but trusting about how the operation is accomplished, the mappings can be swapped out in a module isolated from the logic, and the model can make assertions and take actions in the context of its provider, without knowing who its provider is.

The benefits of these IoC containers are clear. You have one set of interfaces to manage data mangling, and that set of interfaces is essentially a proxy. The proxy acts as a mapping service that can be rigged and re-rigged for isolating components, and as such it also transparently supports object mocking.

What I argued in my previous entry was that there are serious issues I have with these IoC container systems:

  1. They require a dependency upon the IoC container library (Castle Windsor, et al). This introduces more moving parts and more areas where the application can be prone to failure. It's not only that it's a dependency upon a third party software solution that may or may not have bugs, it's that the maintainers of the system have something else to worry about:
    • Potential fail points
    • Learning curves
    • Extra configuration
    • The oxymoron that is the dependency upon the dependency injection mechanism
  2. It's a massive new learning curve for those maintaining the application. When working with business objects, the last thing a developer should have to worry about is the support infrastructure, and the whole idea of all of the objects being tailored around the IoC container API suggests that the whole thing just got a lot messier and kludgier, rather than cleaner.
  3. No two IoC container subsystems are anywhere near alike, so switching from one system to another may require a huge refactoring project.

Contained Interfaces

Going back to classic IoC, another scenario is where the model, being lightweight, retains a lightweight reference to the provider it came from. I'm not talking about data classes like ADO.NET's DataReader, which has to have a reference to its provider in order to support things like Read(); rather, I'm talking about POCOs (Plain Old C# Objects), your domain objects, which implement an interface that exposes a standard interface reference to a provider that can handle all of their CRUD operations. So when, for example, a Save() method gets invoked, it gets invoked directly on the model, and the model then turns around and tells its referenced provider to do the job on its behalf.

As long as the interface is established and is standardized, it doesn't matter what implementation the interface actually uses. A controller can inject the implementation of the provider directly to the model as soon as it is appropriate.
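To illustrate, here's a minimal sketch of the Contained Interfaces idea. The names (IRecordProvider, Customer) are hypothetical, invented for this sketch, and are not part of the attached sample:

using System;

// Hypothetical sketch: the model holds only an interface reference to its
// provider; a controller injects whichever implementation is appropriate.
public interface IRecordProvider
{
    void Save(object record);
    void Delete(object record);
}

public class Customer
{
    public string Name { get; set; }

    // Injected by a controller; could be a SQL provider, an in-memory
    // provider, or a mock.
    public IRecordProvider Provider { get; set; }

    public void Save()
    {
        // "You there, save yourself!" -- the model turns around and asks
        // its injected provider to do the actual persistence work.
        Provider.Save(this);
    }
}

A controller would assign Provider before the application ever calls Save(), and swapping implementations never touches the model or the application logic.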

In this way, interfaces themselves facilitate the core bread and butter for IoC. (Not so with some of the IoC containers previously described, as they often tend to use generics and delegate placeholders to wrap everything in actual implementation code.) There's nothing new here; this has been the premise of software since C++ and COM for many, many years. But it comes really close to what I'm trying to argue here. We've been building up a whole new breed of developers who want to fix what ain't broke.

To be honest, I feel torn at this point between the basic contained interfaces scenario versus the events-based IoC I'm proposing, but there are pros and cons of each, and of both together, which I'll get into in a moment.

Evented Methods 

I'd like to play with terms here and suggest a pattern name: Evented Method. I haven't seen this pattern demonstrated much before, which is why I feel like I'm suggesting something many people haven't considered. Perhaps there is a reason why this pattern isn't used much, but I'm just not sure what that reason is, which is why I'm blogging it; maybe someone can tell me. It seems WAY too versatile to overlook.

The idea of Evented Methods is basically to forward the invocation of a method to an event, so that the method is always handled by an unknown outside provider.

In the context of a data model, for the purposes of demonstration, I would like to call a data model that implements the Evented Method pattern an Evented Record. An Evented Record, then, is a variation on the Active Record pattern, which might imply:

  • "new" constructor creates a new record or, with constructor parameters, loads an existing record into a POCO data model object 
  • properties exposed on the object are assumed to be data fields
  • the data model can be loaded or reloaded from its data source 
  • a "save" function should persist the state of the object (insert or update)
  • a "delete" function should remove the record from the database or from the data file

But instead of implementing these functions, the data model only exposes them as interfaces and keeps them that way. There is no implementation of "save" within the data model, yet the data model itself can be called upon by the app layer, where the model's Save() method is treated by the application logic as, "You there, save yourself!!" The model does this by having delegate references to the provider interfaces directly in the object. So if "Save" is called upon from the app to the model, the model will simply pass the request along and say, "Hello out there? Can someone please do this saving action for me?" The provider, which should already be subscribed to the object, "sees" the event and handles it as if it was called directly.

This is a variation on Contained Interfaces I previously described, where the model has a reference to the interface of the provider and invokes it. The difference here is the form of the interface bindings, using event subscriptions (events pointing to delegates, which then point to providers) rather than actual interface references (properties/fields pointing to providers).

Note: This Evented Method pattern is actually an implementation subset of the delegation pattern.
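To contrast with the contained-interface sketch earlier, here is the same hypothetical Customer done as a minimal Evented Method sketch. Again, every name here is illustrative and not from the attached sample:

using System;

// The model holds no provider reference at all; Save() only forwards the
// call to an event that an outside provider may have subscribed to.
public delegate void SaveRequestedHandler(Customer sender);

public class Customer
{
    public string Name { get; set; }

    // A provider (or several, e.g. a logger plus a persister) subscribes here.
    public event SaveRequestedHandler HandleSave;

    public void Save()
    {
        if (HandleSave == null)
            throw new InvalidOperationException("No provider has subscribed to HandleSave.");

        // "Hello out there? Can someone please do this saving action for me?"
        HandleSave(this);
    }
}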

There are several reasons why one might favor the event-based interfacing over the Contained Interfaces I previously described. Among them,

  • Raising an event can be handled by multiple subscribers. This can be useful for things like logging and auditing.
  • Events facilitate an assignment syntax that enables plug-in-like support without the "stepping on toes" of property assignment whereby a prior assignment might have already been made.
  • The isolated nature of exposing interfaces via events rather than implementation properties gives application developers and modelers greater ability to focus less on the provider interfaces and more on the domain code.
  • Some might argue that the model should only know about itself and know very little or nothing about its providers, not even its interfaces.
  • The nature of the behavior is out-bound, arguably implying a "needs of" or a "happening to" the model rather than an incoming "thou shalt" command from the model.
  • Event handlers are invisible to the outside world -- only the event raiser can see the event subscribers. 

On the other hand, there are also arguments one could make in favor of simple contained interfaces.

  • A simple but complete contained interface can provide predictability of the nature of the model's providers to the model.
  • A strict one-to-one relationship (not one-to-potentially-many) between the model and the provider can be enforced.
  • The invocation relationship between the model and its provider is much more trivial.

Like I said, I am torn. I am not sold on IoC container libraries yet, however, because, as I am about to demonstrate, there isn't much need for them except perhaps in extremely complex scenarios.

The Biggest Problem With Contained Interfaces and Evented Methods I've Found So Far ...

There is one serious caveat with both contained interfaces and evented methods, and that is that the data models must be properly disposed of. In the disposing of these objects, the references to the providers must be dropped. Failure to do this can result in the failure of the model and/or the provider to be destroyed, which can result in memory leaks, in providers running out of resources, or, in some rare cases, data corruption or application faults due to delayed finalization of the objects.

If the developer simply implements and invokes IDisposable where appropriate, things should behave correctly.
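As a minimal sketch (illustrative names; the actual EventedRecordDictionaryBase in the download may handle this differently), the disposal only needs to drop the delegate references so that neither side keeps the other reachable:

using System;

public class EventedModel : IDisposable
{
    // A provider subscribes to this event to handle persistence.
    public event EventHandler HandleSave;

    public void Dispose()
    {
        // Dropping the delegate references releases the subscribed providers,
        // so both the model and the providers can be collected normally.
        HandleSave = null;
    }
}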

Evented Records Demonstrated

The code sample I provided at the top of this article is something I produced entirely for the sake of this discussion; it took me about a day to put together.

The EventedRecord project is a provider-agnostic data persistence prototype library that illustrates the use of evented methods as a form of contained interfaces whereby inversion of control (not yet fully demonstrated) can be applied. It demonstrates:

  • The use of an interface (IEventedRecord) to define how events are used to delegate (v) the CRUD operations to the provider(s)
    • HandleXXX events (HandleSave, HandleLoad, HandleDelete)
      • Always returns an array of EventedResult (one per each subscriber to the event)
    • Save(), Load(), and Delete() interfaces for application code to be able to invoke
  • An example implementation abstract base that
    • Uses a dictionary object to maintain data that
      • maintains dirtiness state, and
      • raises events before and after changes to properties
    • Raises events before and after saving
    • Can reset itself to the data that was originally loaded into it, before changes began
    • Hides the ugly and, to application logic, irrelevant "HandleXXX" events from its IntelliSense members list
    • Invokes the event subscribers one at a time, generating a list of results. 
    • In the event of a failure (Exception), the list of results so far is compiled and passed into the resulting custom exception that is thrown on behalf of the application.
  • A basic data provider interface for handling data objects by way of events.
    • Nothing to it, just:
       
        void BindAsHandler(IEventedRecord record);
       
    • The expectation is that it would subscribe to all the IEventedRecord object's "HandleXXX" events that it can handle.

This diagram demonstrates the initial IEventedRecord interface definition as provided in the source code at the top of this article, and why it is called what it is called.

* I put a yellow asterisk on GetValues() to mention that it is one potential way for the handler to get to the data fields (name and value collection). It should return a Dictionary<string, object>. 
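For readers without the download handy, here is a rough reconstruction of the interface shape described above. The exact signatures in the attached source may differ; in particular, the EventedRecordHandler signature and the return types below are my guesses based on the description, and EventedResult and NameValuePair are stubbed out:

using System.Collections.Generic;

public class EventedResult { /* result details omitted; see the download */ }
public class NameValuePair { /* name/value pair; see the download */ }

// Guessed handler shape: one result per subscriber.
public delegate EventedResult EventedRecordHandler(IEventedRecord record);

public interface IEventedRecord
{
    // The "evented methods": the model raises these, providers handle them.
    event EventedRecordHandler HandleSave;
    event EventedRecordHandler HandleLoad;
    event EventedRecordHandler HandleDelete;

    // What application code calls; each raise yields one EventedResult
    // per subscriber.
    EventedResult[] Save();
    EventedResult[] Load(params NameValuePair[] key);
    EventedResult[] Delete();

    // One potential way for a handler to get at the field names and values.
    Dictionary<string, object> GetValues();
}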

The console project code demonstrates how it is used:

static void Main(string[] args)
{
    MemoryDataProvider memProvider = new MemoryDataProvider();
    Employee emp = new Employee(memProvider);

    // .. or in the desire to make Employee clueless about the provider,
    // I could just as well have gone with ..
    emp.Dispose();

    emp = new Employee();
    memProvider.BindAsHandler(emp); // provider-controlled binding

    emp.BeforeSave += new EventHandler(delegate(object sender, EventArgs e)
    {
        Console.WriteLine("Saving employee: " + ((Employee)sender).Name);
    });
    emp.AfterSave += new EventHandler(delegate(object sender, EventArgs e)
    {
        Console.WriteLine(((Employee)sender).Name + " has been saved!");
    });

    emp.Name = "Jon";
    emp.Title = "Programmer";
    emp.Bio = "Loser.";
    emp.Phone = "999-999-9999";
    emp.Ext = 101;
    emp.Save();
    int newid = emp.Id;

    emp.Dispose();

    emp = new Employee(memProvider, new NameValuePair("Id", newid));
    Console.WriteLine(emp.Name + " has been reloaded!");

    // .. or in the desire to make Employee clueless about the provider,
    // I could just as well have gone with ..
    emp.Dispose();

    emp = new Employee();
    memProvider.BindAsHandler(emp);
    emp.Load(new NameValuePair("Id", newid));
    Console.WriteLine(emp.Name + " has been reloaded AGAIN!");

    WriteLine('-');

    Console.WriteLine("Press ENTER to exit ...");
    Console.ReadLine();
}

When this executes, the output should look like this:

Saving employee: Jon
Jon has been saved!
Jon has been reloaded!
Jon has been reloaded AGAIN!
-------------------------------
Press ENTER to exit ...

There are a number of things demonstrated in this prototype program, not the least of which are better illustrated in the code external to Main() that Main() invokes.

But I guess I'll start at the top.

MemoryDataProvider memProvider = new MemoryDataProvider();

MemoryDataProvider is just an object that implements IEventedRecordHandler, which only has one member, BindAsHandler(). In so doing, it provides event handlers for each of the three CRUD operations (Load, Save, Delete).

public void BindAsHandler(EventedRecord.IEventedRecord record)
{
    record.HandleLoad += new EventedRecordHandler(Record_HandleLoad);
    record.HandleDelete += new EventedRecordHandler(Record_HandleDelete);
    record.HandleSave += new EventedRecordHandler(Record_HandleSave);
}

Internally, MemoryDataProvider is just doing data storage and retrieval in a dictionary of dictionaries.
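As a sketch only (this is not the actual MemoryDataProvider code), the backing store behind such a provider could be as simple as the following; its Record_HandleXXX handlers would funnel each record's GetValues() output into and out of it:

using System.Collections.Generic;

// Illustrative in-memory "database": records keyed by id, each record being
// a field-name/value map.
public class InMemoryStore
{
    private readonly Dictionary<object, Dictionary<string, object>> _rows =
        new Dictionary<object, Dictionary<string, object>>();

    public void Save(object id, Dictionary<string, object> fields)
    {
        // Copy the field map so later edits to the live model don't silently
        // mutate what was "persisted".
        _rows[id] = new Dictionary<string, object>(fields);
    }

    public Dictionary<string, object> Load(object id)
    {
        return _rows[id];
    }
}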

A SqlDataProvider is also partially implemented, but not tested, and certainly not in accordance with SQL best practices, but it's there, so .. there it is. *shrug*

Moving on, ..

Employee emp = new Employee(memProvider);

Employee is a class that implements IEventedRecord. This particular constructor is one example of IoC-friendly code because it introduces the dependency object at the time of its instantiation (the provider is injected during the model's creation). Here's what the constructor does under the covers:

public Employee(IEventedRecordHandler handler, params NameValuePair[] key)
 : base(handler, key)
{
}

Employee inherits from my sample base class (EventedRecordDictionaryBase), which does the implementation of IEventedRecord ahead of time and also adds this binding constructor.

public EventedRecordDictionaryBase(IEventedRecordHandler handler, params NameValuePair[] key)
 : this()
{
    handler.BindAsHandler(this);
    if (key != null && key.Length > 0)
    {
        this.Load(key);
        this.Reset(ResetMode.RetainNotDirty);
    }
}

As you can see, all passing the handler into the constructor did for us was give us an early chance to invoke its BindAsHandler() method. After that invocation is done, the provider is forgotten.

Under the covers, there is actually still a reference being maintained. The BindAsHandler() method was supposed to (and did, in this case) subscribe to the events exposed by IEventedRecord. These subscriptions are delegates that are stored in a list directly associated with this EventedRecordDictionaryBase object (again, this is the base object for Employee). Literally, in the source code, you'll find the storage events named _HandleSave, _HandleLoad, and _HandleDelete, so named with underscores because the IEventedRecord event interfaces are "hidden" from the object's public interface (although they can still be accessed using an explicit cast to IEventedRecord).

private event EventedRecordHandler _HandleSave;
event EventedRecordHandler IEventedRecord.HandleSave
{
 add
 {
  _HandleSave += value;
 }
 remove
 {
  _HandleSave -= value;
 }
}
private event EventedRecordHandler _HandleDelete;
event EventedRecordHandler IEventedRecord.HandleDelete
{
 add
 {
  _HandleDelete += value;
 }
 remove
 {
  _HandleDelete -= value;
 }
}
private event EventedRecordHandler _HandleLoad;
event EventedRecordHandler IEventedRecord.HandleLoad
{
 add
 {
  _HandleLoad += value;
 }
 remove
 {
  _HandleLoad -= value;
 }
}


Notice the HandleXXX events are not exposed without explicitly casting to the IEventedRecord interface.

In EventedRecordDictionaryBase, in fact, the event handlers' delegates are treated as a delegate chain and invoked one at a time. Their results are compiled into a results list and returned as an array, every time. Beyond this codebase, I discussed how and perhaps why this might be done in my previous article, Part One.

Continuing in the program,

// .. or in the desire to make Employee clueless about the provider,
// I could just as well have gone with ..
emp.Dispose();

emp = new Employee();
memProvider.BindAsHandler(emp); // provider-controlled binding

Note that I invoked Dispose() just as a clean-up to explicitly drop the event subscriptions; I'm still looking for ways around doing that or whether it's even necessary.

In this scenario, I just use an empty constructor and then invoke memProvider's BindAsHandler() method directly. I can imagine this happening in mapping class scenarios, where two discrete routines are executing, one to create the employee object, the other to map the employee to the provider. In other words, where you see memProvider.BindAsHandler(emp), that invocation might occur elsewhere in a system rather than on the next line of the same routine.

Next, I start having fun with demonstrating the use of events as a logging mechanism:

emp.BeforeSave += new EventHandler(delegate(object sender, EventArgs e)
{
    Console.WriteLine("Saving employee: " + ((Employee)sender).Name);
});
emp.AfterSave += new EventHandler(delegate(object sender, EventArgs e)
{
    Console.WriteLine(((Employee)sender).Name + " has been saved!");
});

When the code executes, you'll see "Saving employee: Jon" and then "Jon has been saved!", but only after Save() is invoked (and "Jon has been saved!" should only show if saving was successful).

I've been using this approach for logging since I began writing C#. It just seemed handy to me to raise log events blindly and only handle them when needed by subscribing to the events.

Next, I get to actually populate some fields:

emp.Name = "Jon";
emp.Title = "Programmer";
emp.Bio = "Loser.";
emp.Phone = "999-999-9999";
emp.Ext = 101;

Nothing special about this, except I suppose it's worth the elementary observation that Employee, being already an inheritor of a fully implemented base class, is quite easy to look at and maintain by simply building on the dictionary-driven base class for field/property management:

..

public string Name
{
 get { return (string)base.GetValue("Name"); }
 set { base.SetValue("Name", value); }
}

public string Title
{
 get { return (string)base.GetValue("Title"); }
 set { base.SetValue("Title", value); }
}


..

Now, the magic happens:

emp.Save();

As I've explained many times by now, this will invoke the HandleSave event, which is handled by the MemoryDataProvider object. The implementation code isn't terribly simple (it's not just if (HandleSave != null) HandleSave();), but it does implement the Evented Method pattern I have described repeatedly.

I should note now that as I was building up this demonstration with this Employee object, I came across the ID situation. Oftentimes when working with data, you don't know the database ID of the record you're creating until it has been persisted (INSERTed into the database, etc.). So, just to make the demonstration a bit more complete, I added an [AutoIdent] attribute to the Id property, which defaults to -1, meaning that it is uninitialized (i.e. a new record).

When Save() is invoked, the handler then scans the properties of the object being saved, and checks for an AutoIdentAttribute. My MemoryDataProvider has a sample auto-identity routine that updates the model object and assigns it a new ID. This is why the following line of code would return something that is now different than it was before emp.Save(); executed.

int newid = emp.Id;

Note that, now that I'm introducing attribute checking to the persistence provider, my attribute-checking implementation is very incomplete. For example,

  • I made the assumption that all fields returned from IEventedRecord.GetValues() have field names that match the properties of the object and that these properties are reflectable.
  • I'm not "documenting" via attributes an alternate name for the data field.
  • I'm not marking some properties as persistable vs. not persistable.
  • I'm not caching the reflected type properties.

All of these should be done, both for the sake of developer usability and for the sake of performance. Even so, I started something; I started a coding pattern that I can build from, and I can add the above things from here if I want to. A sketch of the basic attribute scan follows.
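To make the idea concrete, here is a sketch of the kind of attribute scan described above. The AutoIdentAttribute name comes from the sample, but the implementation below is illustrative rather than the code from the download:

using System;
using System.Reflection;

[AttributeUsage(AttributeTargets.Property)]
public class AutoIdentAttribute : Attribute { }

public static class AutoIdentScanner
{
    private static int _nextId = 1;

    // Called by a provider just before persisting: find [AutoIdent]
    // properties still holding the uninitialized default of -1 and assign
    // a new identity value.
    public static void AssignIdentity(object record)
    {
        // Note: nothing is cached here; as admitted above, a real
        // implementation would cache the reflected property data.
        foreach (PropertyInfo prop in record.GetType().GetProperties())
        {
            if (!Attribute.IsDefined(prop, typeof(AutoIdentAttribute)))
                continue;

            object current = prop.GetValue(record, null);
            if (current is int && (int)current == -1)
                prop.SetValue(record, _nextId++, null);
        }
    }
}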

emp.Dispose();

emp = new Employee(memProvider, new NameValuePair("Id", newid));
Console.WriteLine(emp.Name + " has been reloaded!");

This code demonstrates how I've essentially destroyed my Employee object, then created a new Employee object from the data provider. This time, however, while creating the Employee object, I also loaded the object by passing an identifier. Once again, I'm using the constructor to pass in my provider context, but if I wanted to do that in total isolation I could do so, which the next code demonstrates ..

// .. or in the desire to make Employee clueless about the provider,
// I could just as well have gone with ..
emp.Dispose();

emp = new Employee();
memProvider.BindAsHandler(emp);
emp.Load(new NameValuePair("Id", newid));
Console.WriteLine(emp.Name + " has been reloaded AGAIN!");

This does the exact same thing. The previous code using the constructor only wrapped this bind-and-load routine.

Conclusion (For This Part)

So that's the demonstration. I want to emphasize again how dependency injection occurred by way of an evented method pattern and how providers can implement services for objects without the objects being concerned about their implementation.

I have not yet demonstrated how testability comes into play here, but I did demonstrate IoC by way of events in a practical, real-world example. I hope people find this to be interesting and thought-provoking.


IoC By Way Of Simple Event Raising: Part 1

by Jon Davis 14. August 2008 20:20

There's a crappy and tiny little sample project available for this post at: IoCEvents_Part1.zip from which one can experiment with commenting things out and exciting stuff like that.

So I've noticed that the latest sexy trend for the last couple years in software development has been to adopt these frameworks that facilitate IoC, or Inversion of Control. Among the "sexy" toys that people have adopted in .NET space are Castle Windsor, Spring.net, StructureMap, et al. 

I have little doubt that I'm going to get knocked a few brownie points for this post. But I have a really tragic confession to make: I don't get it. I see the purpose of these tools, but sometimes I wonder if architects have picked up the virus that causes them to add complexity for the sake of complexity. These tools are supposed to help us make our lives easier, but the few exposures I've had of them have had me scratching my head because some of them introduce so much complexity that they seem a little silly. Maybe my brain is just a little too average in size. I like to think in simple terms.

And I get overwhelmed by the complexity when I try to read up on Castle Windsor, Rhino Mocks (which is not IoC but complements it), and other ALT.NET frameworks that have made inroads to making .NET solutions "agile" and testable. I look at things like [this] and while perhaps a lot of architects and ALT.NET folks are thinking, "cool", I'm scratching my head and thinking, "eww!!" I'm not very savvy with Rhino Mocks nor Castle Windsor but looking at blog entries like that I'm frankly scared to get to know them.

It's not that I'm stupid or that I lack knowledge or discipline. It's that I have the firm belief that things that require complex attention and thought should be isolated from the practical business logic and "everyday code", including their test code and the containing class library projects themselves. Were I to take on a role of architecture, I would do everything I can to be as mainstream and lightweight as possible, so that any junior programmer can see and understand what I'm doing with a little effort, and any senior developer can read and understand what's going on without even thinking about it. Granted, I haven't been all that great at it, but it's something to strive for, and not something I see when I watch people build up dependencies upon mocking and IoC frameworks.

Introduction Of New Dependencies Is Not Clean IoC. (Or, "Just Say 'No!' To IWindsor!")

Here's where I'm most annoyed by what little I've seen of IoC container toolkits: they introduce a dependency upon the IoC container toolkit. My first exposure to an IoC container in the workplace (as opposed to looking at someone else's solution, such as Rob Conery's videos) was an awfully ugly one. Legacy code that I already despised, but which had been more or less stabilized over years of maintenance and hand-holding, now having partial implementations of integration with Castle Windsor, was throwing bizarre exceptions I'd never seen before. The exceptions being thrown by Castle were, to my untrained eye, utterly incomprehensible. This was while we tried to scramble to update the codebase so that my latest changes could get rolled out. I was told to add huge blocks of configuration settings in the web.config file. After adding those config file changes, all kinds of new errors were showing up. "Oops. I guess I'm not done yet," the architect muttered. Knowing it could be days before he could be "done", and I needed to roll out my changes by the end of the day, I decided to roll back to the earlier legacy codebase that had no dependency upon Castle.

Since then, they've cleaned it up and rolled out the revisions with Castle dependency, which is great (and I've since quit) but when it comes to inversion of control I still don't understand why everything must become more brittle and more heavily dependent upon third party frameworks, particularly when the project needing IoC exists as a standalone set of business objects. Ask yourself, "What direct correlation do my business objects have with this massive IoC subsystem? Do I want to introduce kludge to this, or should my business objects be as pure as I can make them, in the interest of the original premise of IoC, which is to make the objects or functionality as completely agnostic to, but cooperative with, external projects and libraries as possible?" Or at least, that's what I think the objectives should be with IoC.

Meanwhile, I saw a video on Rob Conery's blog where he was exploring the utilization of an IoC framework, and I still have myself asking the question, why? You already have a "know it all" mapping class, why not just map things together with basic event handlers?

C# Already Has An IoC Framework. It's Called The "Event".

Dependency injection and IoC are not rocket science, or at least they shouldn't be. Theoretically, if something needs something from someone else, it should ask its runtime environment, "Um, hey can someone please handle this? I don't know how." That's why events exist in .NET.

Events in .NET are often misunderstood. They're usually only used for UI actions -- someone clicks on a button and an event gets raised. Certainly, that's how the need for events started; Windows is an event-driven operating system, after all. Nearly all GUI applications in Windows rely on event pumps.

In my opinion, Java's Spring Framework, which is perhaps the most popular dependency injection framework on the planet, became popular because Java didn't have the same eventing subsystem that .NET enjoys. It's there, but, originally at least (I haven't done much with Java since 1999), it wasn't very versatile. And likewise, in my opinion, the .NET world has begun to adopt Spring.NET and Castle Windsor because non-.NET languages have always needed some kind of framework in order to manage dependency injection, and so the Morts are getting "trained in" on "the way things are done, the comp-sci way". On this, I think the ALT.NET community might've gotten it wrong. 

If you think of .NET events as being classic IoC containers instead of GUI handlers, suddenly a whole new world begins to open up.

In the GUI realm, the scenario has changed little over the last decade. When a mouse moves and a button clicks, something happens in IoC space that should sound familiar:

  1. The mouse driver doesn't know if something is going to handle the circumstances, but it is going to raise the event anyway.
  2. Windows passes the event along to the GUI application. It doesn't know if the application is going to handle the circumstances, but it is going to raise the event anyway.
  3. The application passes the event along to the control that owns the physical space at the mouse's screen coordinates. It doesn't know if the control is going to handle the circumstances, but if the control has subscribed to the event, it is going to raise the event anyway.
  4. The control is a composite control that has a button control at the mouse's coordinates. It gets "clicked", and another event gets raised, which is the click event of a button control. The raising of the event is done internally in .NET, and .NET doesn't know how the event is going to be handled, but if the application has subscribed to the button control's event handler, it raises the event anyway.
  5. The application processes the button control's click event and invokes some kind of business functionality, such as "PayCustomer()".

In this way, Windows and the application became an IoC mapping subsystem that mapped mouse coordinates and actions to particular business functionality.

If you apply the same principle to non-GUI actions, use events in C# interfaces for everyday object member stubs, and follow an events-driven IoC pattern, you can easily find the mapping of disparate functionality to be pluggable and fully usable as first-class functionality baked right into C#.

Events And Event Handlers Are Truly Dependency-Neutral

Event handlers are just the dynamic invocation of delegates. There are no other significant qualifications for events or event handlers, other than the obvious:

  • The events themselves must have access modifiers (public, protected, etc.) that enable the handler to "see" the event in the first place.
  • The raised event's argument types and the return value type might require external library reference inclusions in the event handling project.
    • On the flip side the event handler can offer dependencies that the event raiser knows nothing about, which is where dependency injection comes into play.

Most people reading this already know all about how events and delegates in C# work, but I think it's very possible that C# programmers have had difficulty putting two and two together and discovering the versatility of event delegation. I suppose it's possible that some things got assumed that shouldn't have been.

Incorrect Assumption #1: Events Should Only Be Used For GUI Conditions

The fact is you can use events pretty much anywhere you want. Got some arcane business object that "does something" now and then? You can use an event there. Seriously. But you knew that.

Incorrect Assumption #2: Events Should Strictly Adhere To .NET's Pattern Of Event Handler Signatures

The common event handler signature that Microsoft uses in .NET 2.0 is seen in the EventHandler delegate:

delegate void EventHandler (object sender, EventArgs e);

This is actually not a desirable signature for business objects. It's fine in GUI applications, and, while a little awkward, it's heavily used in ASP.NET Web Forms. But it's definitely not very useful in specifically-crafted class libraries where handler behaviors are assumed and sometimes required.

You do not need to pass object sender, although you probably should send the sender object anyway. Typed.

The reason why object sender exists in the standard pattern is because the event handler, being a delegate, can be reused to handle events having identical signatures on behalf of multiple different objects. But if there is, by design, a one-to-one relationship between the handled object and the object's handler, the sender can be safely inferred.

That said, though, if you do pass the sender along, there's no significant need to send it as type object. Send it as the lowest common denominator of what it can be. I've wasted so much time typecasting the sender in my event handlers I'm actually annoyed that "object sender" is a pattern; in Win Forms, the lowest common denominator should be System.Windows.Forms.Control, not object; and in the Web Forms world, the lowest common denominator should be System.Web.UI.Control, not object.
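For example, a typed-sender event might be sketched like this (all names here are illustrative):

using System;

// The delegate declares the sender as Invoice rather than object, so
// handlers never have to typecast.
public delegate void InvoiceSavedHandler(Invoice sender);

public class Invoice
{
    public int Number { get; set; }

    public event InvoiceSavedHandler Saved;

    public void Save()
    {
        // .. persist, then notify with a strongly typed sender ..
        if (Saved != null)
            Saved(this);
    }
}

public class AuditLog
{
    // No cast from object required.
    public void OnInvoiceSaved(Invoice sender)
    {
        Console.WriteLine("Saved invoice #" + sender.Number);
    }
}

Subscribing is the usual invoice.Saved += log.OnInvoiceSaved; and the handler gets the concrete type for free.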

You do not need to pass an EventArgs object.

EventArgs is a generic argument container object. It is, in itself, an empty object, but inheriting it and adding your own properties enables you to reuse the Microsoft .NET EventHandler delegate for strongly typed events without breaking syntax barriers. Here again, the whole premise of the original .NET design is that type casting and boxing is the easy answer for managing arguments everywhere.

But it is not a silver bullet and it is certainly not the optimal way of managing interfaces.

The fact is, you can pass anything you want into your event handlers. If you want your event handler to look like this:

delegate int SomethingHappened(MyClass sender, string someString, int someInt, List<Dah> dahs);

.. instead of this ..

delegate void SomethingHappened(object sender, SomethingHappenedEventArgs e);

.. you can.

Why would you want to? Well, for one thing, carrying data in an EventArgs subclass requires you to instantiate a new EventArgs object every single time you raise the event. You could use a singleton or something, but that would be atrocious design and not thread-safe.

You do not need to return void.

Some people might not actually even know this, but returning void is optional. You can return a string, or anything else. For example, if an event named "FindAString" expected a string as a return value, the event handler could return the string value to the event raiser.

public delegate string FindAStringHandler(object someContext);

public event FindAStringHandler FindAString;

Now here your class having the event FindAString can simply invoke the event as if it was a method.

string foundString = FindAString(this);

Obviously this will fail if FindAString was not subscribed to, but if it was, it would "just work". (If it wasn't, it would raise a NullReferenceException, which incidentally is a very poor exception choice on Microsoft's part!)

Be Wary Of Multiple Subscribers

One concern to have about returning anything other than void is that an event might have multiple subscribers. When there are multiple subscribing event handlers, the last event handler's return value is captured in the normal invocation.

For this reason, I wish that Microsoft would add a keyword to event syntax in C#: single. Some other verbiage with the same function would be fine, but the idea is this: if two event subscriptions were attempted on the same object's event within the same project, a compile-time error would occur; and if, at runtime, two event subscriptions were made from disparate projects on the same object's event, a runtime error would occur.

Event subscriptions are stacked like a list. They're not significantly unlike List<Delegate>, although technically that's not how they're implemented.

You can evaluate the number of subscribers by calling GetInvocationList() on the event's delegate and checking the Length of the returned array, from the scope of the object that contains and invokes the event.

public event FindAStringHandler FindAString;

public string Invoke() {
    if (FindAString != null) {
        if (FindAString.GetInvocationList().Length > 1)
            throw new InvalidOperationException("Too many subscribers to FindAString event.");
        else return FindAString(this);

    } else throw new InvalidOperationException("FindAString event not handled.");

}

This in effect manually enforces the single keyword functionality I proposed above.

That said, if you want the multiple subscribers to execute, such as to obtain the return values from each of the multiple subscribers, you can simply invoke each delegate in GetInvocationList(), one at a time.

//return FindAString(this, _params);  // returns one string
List<string> s = new List<string>();
FindAString.GetInvocationList().ToList().ForEach(delegate(Delegate d)
{
    s.Add(d.DynamicInvoke(this, _params) as string);
});
return s; // returns a list of string rather than just one string
 

This introduces kludginess, I suppose. That's about three or four more lines of code than I want to write. (Yes, I just said that! Keep in mind, using events everywhere like I'm implying here would call for an absolute minimum of code to be written for frequently needed patterns.) But then that's what utility classes and other patterns are for. You can create a generic (<T>) helper to wrap this functionality, as sketched below.
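Here is a sketch of what such a generic helper might look like; EventInvoker is an illustrative name, not an existing API:

using System;
using System.Collections.Generic;

public static class EventInvoker
{
    // Invoke every subscriber of a multicast delegate, one at a time, and
    // collect the return values as a typed list.
    public static List<TResult> InvokeAll<TResult>(Delegate handler, params object[] args)
    {
        List<TResult> results = new List<TResult>();
        if (handler == null)
            return results; // no subscribers, nothing to collect

        foreach (Delegate d in handler.GetInvocationList())
        {
            results.Add((TResult)d.DynamicInvoke(args));
        }
        return results;
    }
}

With that in place, the block above collapses to roughly return EventInvoker.InvokeAll<string>(FindAString, this, _params);.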

Interfaces Can Be Eventful, Too!!

One should not forget that interfaces can also define events. You can enforce the proper exposure of an event in an implemented object and have it use the correct signature (the correct delegate).

The only caveat I can think of with regard to interfaces and events is that there is no way to use interfaces to force an exposed event to be subscribed at compile-time; you can only enforce event handling by late-checking the delegates list at runtime, i.e. checking to see if it's null.
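A small sketch of that, repeating the FindAStringHandler delegate from above so the snippet stands alone:

using System;

public delegate string FindAStringHandler(object someContext);

// The interface forces implementers to expose the event with the correct
// delegate signature ..
public interface IStringFinder
{
    event FindAStringHandler FindAString;
}

public class StringFinder : IStringFinder
{
    public event FindAStringHandler FindAString;

    public string Find()
    {
        // .. but whether anyone actually subscribed can only be checked at
        // runtime, by testing the delegate for null.
        if (FindAString == null)
            throw new InvalidOperationException("FindAString event not handled.");
        return FindAString(this);
    }
}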

Putting It Together

So if IoC is handled by hand using events and event handlers, where does it go? The simple answer is on the controller. Consider the classic MVC scenario. Where else would it go? I can't imagine. MVC and event delegation are already pretty much built for each other.

In my mind, a mapping class, much like one that would be used with any third party IoC framework, can be used to join the event-raising objects with their event handlers. It might optionally reference a config file to produce the mappings, or not, whatever. (See the sketch after the diagram note below.)

 

This diagram might be trash but I suppose it's something for the sake of discussion. Maybe consider the ability to swap out the controller with a mock controller as well. *shrug*
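Here is a sketch of what such a hand-rolled mapping class might look like; every name in it is hypothetical:

using System;

public delegate void OrderSaveHandler(object record);

// An event-raising model, as in the Evented Method discussion above.
public class Order
{
    public event OrderSaveHandler HandleSave;

    public void Save()
    {
        if (HandleSave != null)
            HandleSave(this);
    }
}

// One possible provider.
public class SqlOrderProvider
{
    public void HandleSaveRequest(object record)
    {
        Console.WriteLine("Persisting " + record.GetType().Name + " to SQL ...");
    }
}

// The "know it all" mapping class, owned by the controller layer: the one
// place where raisers and handlers get joined. Swap the wiring here (or
// drive it from a config file) to re-point the whole application at a
// different provider, or at a mock for testing.
public static class ControllerMappings
{
    public static void Wire(Order order, SqlOrderProvider provider)
    {
        order.HandleSave += provider.HandleSaveRequest;
    }
}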

I decided to slap ": Part 1" onto the end of this blog entry's title because I still want to prove out how simple events and event handling can pull off the requirements of most of the standard testability scenarios that people are using these other frameworks for. But for the most part I *think* that this is a reasonable notion: let's go back to the drawing board and simplify our codebases. What is it we're trying to do? What do we actually need that C# doesn't already provide through simple patterns that have been available since it was conceived?
