Announcing Fast Koala, an alternative to Slow Cheetah

by Jon Davis 17. July 2015 19:47

So this is a quick FYI for teh blogrollz that I have recently been working on a little Visual Studio extension that will do for web apps what Slow Cheetah refused to do. It enables build-time transformations for web apps as well as for Windows apps and class libraries.

Here's the extension: https://visualstudiogallery.msdn.microsoft.com/7bc82ddf-e51b-4bb4-942f-d76526a922a0  

Here's the GitHub repo: https://github.com/stimpy77/FastKoala

Either link will explain more.

Automatically Declaring Namespaces in Javascript (namespaces.js)

by Jon Davis 25. September 2012 18:24

Namespaces in Javascript are a pattern that many untrained or undisciplined developers fail to apply, but they are an essential strategy for retaining maintainability and avoiding collisions in Javascript source.

Part of the problem with namespaces is that if you have a complex client-side solution with several Javascript objects scattered across several files but they all pertain to the same overall solution, you may end up with very long, nested namespaces like this:

var ad = AcmeCorporation.Foo.Bar.WidgetFactory.createWidget('advertisement');

I personally am not opposed to long namespaces, so long as they can be shortened with aliases when their length gets in the way.

var wf = AcmeCorporation.Foo.Bar.WidgetFactory;
var ad = wf.createWidget('advertisement');

The problem I have run into, however, is that when I have multiple .js files in my project and I am not 100% sure of their load order, I may run into errors. For example:

// acme.WidgetFactory.js
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function(e) {
        return new otherProvider.Widget(e);
    }
};

This may throw an error immediately because even though I’m declaring the WidgetFactory namespace, I am not certain that these namespaces have been defined:

  • AcmeCorporation
  • AcmeCorporation.Foo
  • AcmeCorporation.Foo.Bar

So again if any of those are missing, the code in my acme.WidgetFactory.js file will fail.

So then I clutter it with code that looks like this:

// acme.WidgetFactory.js
if (!window['AcmeCorporation']) window['AcmeCorporation'] = {};
if (!AcmeCorporation.Foo) AcmeCorporation.Foo = {};
if (!AcmeCorporation.Foo.Bar) AcmeCorporation.Foo.Bar = {};
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function(e) {
        return new otherProvider.Widget(e);
    }
};

This is frankly not very clean. It adds a lot of overhead to my productivity just to get started writing code.

So today, to complement my using.js solution (which dynamically loads scripts), I have cobbled together a very simple script that dynamically defines a namespace in a single line of code:

// acme.WidgetFactory.js
namespace('AcmeCorporation.Foo.Bar');
AcmeCorporation.Foo.Bar.WidgetFactory = {
    createWidget: function(e) {
        return new otherProvider.Widget(e);
    }
};
/* or, alternatively ..
namespace('AcmeCorporation.Foo.Bar.WidgetFactory');
AcmeCorporation.Foo.Bar.WidgetFactory.createWidget = function(e) {
    return new otherProvider.Widget(e);
};
*/

As you can see, a function called “namespace” splits the dot-notation and creates the nested objects on the global namespace to allow for the nested namespace to resolve correctly.

Note that this will not overwrite or clobber an existing namespace; it will only ensure that the namespace exists.

a = {};
a.b = {};
a.b.c = 'dog';
namespace('a.b.c');
alert(a.b.c); // alerts with "dog"

Where you will still need to be careful, if you are not sure of load order, is that the names all the way up the dot-notation tree should be namespaces only and never defined objects; otherwise, assigning a defined object manually in one script may clobber nested namespaces and nested objects created by another.

namespace('a.b.c');
a.b.c.d = 'dog';
a.b.c.e = 'bird';
// in another script ..
a.b = {
    c : {
        d : 'cat'
    }
};
// in consuming script / page
alert(a.b.c);   // alerts "[object Object]"
alert(a.b.c.d); // alerts "cat"
alert(a.b.c.e); // alerts "undefined"

Here’s the download if you want it as a script file [EDIT: the linked resource has since been modified and has grown significantly], and here is its [original] content:

function namespace(ns) {
    var g = function(){ return this; }(); // the global object
    ns = ns.split('.');
    for (var i = 0, n = ns.length; i < n; ++i) {
        var x = ns[i];
        if (x in g === false) g[x] = {}; // only create it if it's missing
        g = g[x]; // step down into the next level
    }
}

The above was actually written by commenter "steve" (sjakubowsi -AT- hotmail -dot-com). Here is the original solution that I had come up with:

namespace = function(n) {
    var s = n.split('.');
    // template: declare x = {} only if x is currently undefined
    var exp = 'var ___v=undefined;try {___v=x} catch(e) {} if (___v===undefined)x={}';
    var e = exp.replace(/x/g, s[0]);
    eval(e);
    // repeat for each successively deeper dot-path prefix
    for (var i = 1; i < s.length; i++) {
        var ns = '';
        for (var p = 0; p <= i; p++) {
            if (ns.length > 0) ns += '.';
            ns += s[p];
        }
        e = exp.replace(/x/g, ns);
        eval(e);
    }
}
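For comparison, the same ensure-the-path behavior can be written without eval by folding over the split segments. This is a hypothetical modern variant of my own devising, not part of either script above; the optional root parameter is also my addition:

```javascript
// Hypothetical eval-free variant of namespace(): walks the dot-path,
// creating each missing segment, without clobbering existing objects.
function namespace(ns, root) {
    // default to the global object when no root is supplied
    root = root || (typeof globalThis !== 'undefined' ? globalThis : this);
    return ns.split('.').reduce(function (parent, segment) {
        if (!(segment in parent)) parent[segment] = {}; // create only if missing
        return parent[segment]; // step down into the next level
    }, root);
}

// Existing values survive, just as with the originals:
var g = {};
namespace('a.b.c', g);
g.a.b.c.d = 'dog';
namespace('a.b.c', g); // no clobbering
console.log(g.a.b.c.d); // "dog"
```

Returning the innermost object is a small bonus of the reduce form: `var wf = namespace('AcmeCorporation.Foo.Bar')` both ensures the path and hands back an alias in one step.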


Javascript | Pet Projects

Changes Are Coming

by Jon Davis 6. July 2011 23:30

Well, it's been a wonderful ride, nearly half a decade working with BlogEngine.net's great blogging software. But it's time to move on.

Orchard, it was very nice to meet you. You have a wonderful future ahead of you, and I was honored to have known you, even just a little. Unfortunately, you and I are each looking for something different. 

WordPress, you are like a beautiful, sexy whore, tantalizing on the outside and known by everybody and his brother, but quite honestly I'm not sure I want to see you naked more than I already have.

I'm frickin' Jon Davis, I've been doing software and web development for 14, nearly 15, years now, and doggonit I should assume myself to be "all that" by now. Actually, blog engines should be like "Hello World" to me by now. I suppose the only reason I've been too shy to do it thus far is that the first time I built a complete blogging solution, started eight or nine years ago and abandoned six or seven years ago, the thing I built proved to be an oddball hunk of an over-programmed desktop application that I had primarily leveraged to grow broad technical talents. It was a learning opportunity, not a proper blogging solution, and it smelled of adolescence. (To this day it won't even compile because .NET 2.0 broke it.)

In the meantime, I've moved on. I've been an employer-focused career guy for the last five or six years, having little time for major projects like that, but still growing both in technical skill set and in broad understanding of Internet markets and culture.

But I kind of miss blogging. I used to be a prolific blogger. I sometimes browse my blog posts from years ago and find some interesting tidbits of knowledge, in fact sometimes I actually learn from my prior writings because I later forget the things I had learned and blogged about but come back to re-learn them. Sometimes, meanwhile, I'll find some blog posts that are a little bizarre--thoughtful in prose, yet ridiculous in their findings. That's okay. My goal is to get myself to think again, and not be continuously caught up in a daily grind whereby neither my career nor technically-minded side life have any meaning.

Last weekend over two or three days I created a new blog engine. (Anyone who knows me well knows that I've been tinkering with social platform development on my own time for some years, but this one was from-scratch.) I successfully ported all of my blog posts and blog comments from my BlogEngine.net to my new engine and got it to render in blog form using my own NUnit-tested ASP.NET MVC implementation. I would have replaced BlogEngine.net here on my site with my blog engine already, were it not for the fact that as I used Entity Framework Code First I ran into snags getting the generated database schema to correctly align with long-term strategies. And as much as I'd be delighted to prove out my ability to rush a new blog engine out the door, I don't necessarily want to rush a database schema, especially if I intend to someday share the schema and codebase with the world.

And I never said I was going to open-source this. I might, but I also want to commercialize it as a hosted service. I'll likely do both.

But it's coming, and here are my dreamy if possibly ridiculous plans for it:

 

  1. Blogging with comments and image attachments. Nothing special here. But I want to support using the old-skool MetaWeblog API, so that'll definitely be there, as well as the somewhat newer AtomPub protocol.
  2. Syndication with RSS and Atom. Again, nothing special here.
  3. As a blogging framework it will be a WebMatrix-ready web site (not web application). Even though it will use ASP.NET MVC it will be WebMatrix gallery-loadable and Notepad-customizable. The controllers/models will just be precompiled. Note that this is already working and proven out; the depth and detail of customizability (such as a good file management pattern for having multiple themes preinstalled) have not been sorted out yet, though.
  4. AppHarbor-deployable. AppHarbor is awesome! Everything I'm doing here is going to ultimately target AppHarbor. Right now the blog you're looking at is temporarily hosted on a private server, but I want that to end soon as this server is flaky.
  5. Down-scalable. I am prototyping this with SQL Server Compact Edition 4.0, with no stored procedures. Once the project begins to mature, I'll start supporting optimizations for upwards-scalable platforms like SQL Server with optimized stored procedures, etc., but for now flexibility for the little guy who's coming from WordPress to my little blog engine is the focus.
  6. Phase 1 goal: BlogEngine.net v1.4.5.0 approximate feature equivalence (minus prefab templates and extra features I don't use). BlogEngine.net is currently at v2.x now, and I haven't really even looked much at v2.x yet, but as of this blog post I'm currently still using v1.4.5.0 and rather than upgrade I just want to swap it out with something of my own that does roughly the same as what BlogEngine.net does. This includes commenting, categories, widgets, a solid blog editor, and strong themeability; I won't be creating a lot of prefab themes, but if I'm going to produce something of my own I want to expose at least the compiled parts of it to others to reuse, and I'm extremely picky about cleanliness of templates such that they can be easily updated and CSS swappages with minimal server-side code changes can go very far.
  7. Phase 2 goal: Tumblr approximate feature equivalence (minus prefab templates). All I mean by this is that blog posts won't just be blog posts, they'll be content item posts of various forms--blog posts, microblog posts, photo posts, video posts, etc. Still browsable sorted descending by date, but the content type is swappable. In my current implementation, a blog entry is just a custom content type, and blogs are declared in an isolated class library from the core content engine. I also want to support importing feeds from other sources, such as RSS feeds from Flickr or YouTube. Tumblr approximate equivalence also means being mobile-ready. Tumblr is a very smartphone-friendly service, and this is going to be a huge area of focus.
  8. Phase 3 goal: WordPress approximate equivalence (minus prefab templates). Yeah I know, to suggest WordPress equivalence after already baking in something of a BlogEngine.net and Tumblr functionality equivalence, this is sorta-kinda a step backwards on the content engine side. But it's a huge step forward in these areas:
    • Elegance in administration / management .. the blogger has to live there, after all
    • Configurability - WordPress has a lot of custom options, making it really a blog on steroids
    • Modularity - rich support for plug-ins or "modules" so that whether or not many people use this thing, whoever does use it can take advantage of its extensibility
    • Richer themeability - WordPress themes are far from CSS drops, they are practically engine replacements, but that is as much its beauty as it is its shortcoming. You can make it what you want, really, by swapping out the theme.
Non-goals include creating a full-on CMS. I have no interest in trying to build something that competes directly with Orchard, and frankly I think Orchard's real goals are already met by Umbraco, which is a fantastic CMS. But Umbraco is nothing like WordPress; WP is really just a glorified blog engine. If anything, I want to compete a little bit with WordPress. And I do think I can compete with WordPress better than Orchard does; even though Orchard seems to be trying to do just that (compete with WordPress), its implementation goals are more in line with Umbraco's, and those goals are just not compatible, because WordPress is a very focused kind of application with a very specific kind of content management.
 
And don't worry, I don't actually think I could ever literally compete with WordPress, as if to produce something better. I for one strongly believe that it's completely okay to go and build yet another mousetrap, even if mine is of lesser ideals compared to the status quo. There's nothing wrong with doing that. People use what they want to use, and I don't like the LAMP stack or PHP all that much, otherwise I'd readily embrace WordPress. Then again, I'd probably still create my own WordPress after embracing WordPress, perhaps just like I am going to create my own BlogEngine.net after embracing BlogEngine.net.

 


ASP.NET | Blog | Pet Projects

TCP proxying on Linux

by Jon Davis 28. April 2011 10:59

Several months ago I cobbled together a port-modified TCP proxy that runs on the .NET CLR. My intention was to make it usable both on Windows and on *nix systems with Mono. I haven't used it much, though, certainly not in commercial production apps.

However, it appears that *nix already has a couple of solutions in place:

iptables: http://www.debian-administration.org/articles/595 

Perl: http://search.cpan.org/~acg/tcpforward-0.01/tcpforward 

There's also SSH tunneling, which is interesting. 

http://www.revsys.com/writings/quicktips/ssh-tunnel.html 

http://www.stunnel.org/ 

I'm still not quite satisfied, however. As a future change I wanted, and still want, to detect HTTP traffic and hijack the HTTP headers by inserting a custom header indicating the source IP address. I had done this previously using Apache's proxy, but I was hoping to make HTTP detected rather than assumed. I'll see if I ever get around to it.
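The detect-then-hijack idea reduces to a small pure function: peek at the first chunk of a proxied connection, and only if it starts with an HTTP request line, splice in a header carrying the source IP. A minimal sketch (the function name is mine, and using X-Forwarded-For as the header is my assumption; non-HTTP traffic passes through untouched):

```javascript
// Hypothetical sketch: detect an HTTP request in the first chunk of a
// proxied TCP stream and inject a header carrying the source IP.
var HTTP_METHODS = /^(GET|POST|PUT|DELETE|HEAD|OPTIONS|PATCH|TRACE|CONNECT) /;

function injectSourceIp(firstChunk, sourceIp) {
    // not HTTP: hand the bytes through unmodified
    if (!HTTP_METHODS.test(firstChunk)) return firstChunk;
    // find the end of the request line ("GET / HTTP/1.1\r\n")
    var lineEnd = firstChunk.indexOf('\r\n');
    if (lineEnd < 0) return firstChunk; // incomplete request line: give up
    // splice the header in right after the request line
    return firstChunk.slice(0, lineEnd + 2)
         + 'X-Forwarded-For: ' + sourceIp + '\r\n'
         + firstChunk.slice(lineEnd + 2);
}

console.log(injectSourceIp('GET / HTTP/1.1\r\nHost: example.com\r\n\r\n', '10.0.0.5'));
// -> GET / HTTP/1.1\r\nX-Forwarded-For: 10.0.0.5\r\nHost: example.com\r\n\r\n
```

In a real proxy you would buffer only until the first CRLF before deciding, then stream the rest of the connection verbatim; the sniffing happens once per connection.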

Gemli.Data: Basic LINQ Support

by Jon Davis 6. June 2010 04:13

In the Development branch of Gemli.Data, I finally got around to adding some very, very basic LINQ support. The following test scenarios currently seem to function correctly:

var myCustomEntityQuery = new DataModelQuery<DataModel<MockObject>>();
// Scenarios 1-3: Where() lambda, boolean operator, method exec
var linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue == "dah");
linqq = myCustomEntityQuery.Where(mo => mo.Entity.MockStringValue != "dah");
// In the following scenario, GetPropertyValueByColumnName() is an explicitly supported method
linqq = myCustomEntityQuery.Where(mo => ((int)mo.GetPropertyValueByColumnName("customentity_id")) > -1);
// Scenario 4: LINQ formatted query
var q = (from mo in myCustomEntityQuery
         where mo.Entity.MockStringValue != "st00pid"
         select mo) as DataModelQuery<DataModel<MockObject>>;
// Scenario 5: LINQ formatted query execution with sorted ordering
var orderedlist = (from mo in myCustomEntityQuery
                   where mo.Entity.MockStringValue != "def"
                   orderby mo.Entity.MockStringValue
                   select mo).ToList();
// Scenario 6: LINQ formatted query with multiple conditions and multiple sort members
orderedlist = (from mo in myCustomEntityQuery
               where mo.Entity.MockStringValue != "def" && mo.Entity.ID < 3
               orderby mo.Entity.ID, mo.Entity.MockStringValue
               select mo).ToList();

This is a great milestone, one I’m very pleased with myself for finally accomplishing. There’s still a ton more to do but these were the top 50% or so of LINQ support scenarios needed in Gemli.Data.

Unfortunately, adding LINQ support brought about a rather painful rediscovery of critical missing functionality: the absence of support for OR (||) and condition groups in Gemli.Data queries. *facepalm*  I left it out earlier as a to-do item but completely forgot to come back to it. That’s next on my plate. *burp*

Gemli.Data v0.3.0 Released

by Jon Davis 12. October 2009 03:57

I reached a milestone this late night with The Gemli Project and decided to go ahead and release it. This release follows an earlier blog post describing some syntactical sugar that was added along with pagination and count support, here:

http://www.jondavis.net/techblog/post/2009/10/05/GemliData-Pagination-and-Counts.aspx

Remember those tests I kept noting that I still need to add to test the deep loading and deep saving functionality? No? Well I remember them. I kept procrastinating them. And as it turned out, I’ve discovered that a lesson has been taught to me several times in the past, and again now while trying to implement a few basic tests, a lesson I never realized was being taught to me until finally just this late night. The lesson goes something like this:

If in your cocky self-confidence you procrastinate the tests of a complex system because it is complex and because you are confident, you are guaranteed to watch them fail when you finally add them.

Mind you, the “system” in context is not terribly complex, but I’ll confess there was tonight a rather ugly snippet of code to wade through, particularly while Jay Leno or some other show was playing in the background to keep me distracted. On that latter note, it wasn’t until I switched to ambient music that I was finally able to fix a bug that I’d spent hours staring at.

This milestone release represents 130 total unit tests over the lifetime of the project (about 10-20 or so new tests, I'm not sure exactly how many), not nearly enough, but enough to feel a lot better about how Gemli is shaping up. I still need to add a lot more tests, but what these tests exposed was a slew of buggy relationship inferences that needed to be taken care of. I felt the urgency to release now rather than continue adding tests first because the previous release was just not suitably functional on the relationships side.

 

Gemli.Data: Pagination and Counts

by Jon Davis 5. October 2009 03:54

It occurred to me that I’m not blogging enough about how Gemli is shaping up, other than mentioning the project on the whole in completely different blog posts.

In the dev branch I’ve dropped the abstract base DataModelQuery class (only DataModelQuery<TModel> remains) and replaced it with an IDataModelQuery interface. This was a long-needed change, as I was really only using it for its interfaces anyway. It was being used because I didn’t necessarily know the TModel type until runtime, namely in the DataProviderBase class which handles all the deep loads with client-side-joins-by-default behavior. But an interface will work for that just as well, and meanwhile the query chaining syntax behavior was totally inconsistent and in some cases useless.

Some recent changes that have been made in the dev branch include some more syntactical sugar, namely the support for loading and saving directly from a DataModel/DataModel<T> class or from a DataModelQuery<TModel> plus support for pagination and getting record counts.

Quick Loads

From DataModel:

List<MyPoco> collectionOfMyPoco = DataModel<MyPoco>.LoadAll().Unwrap<MyPoco>();

There are also Load() and LoadMany() on DataModel, but since they require a DataModelQuery object as a parameter, and DataModelQuery has new SelectFirst() and SelectMany() that do the same thing (forward the task on to the DataProvider), I’m not sure they’re needed and I might just delete them.

From DataModelQuery:

List<MyPoco> collectionOfMyPoco = DataModel<MyPoco>.NewQuery()
    .WhereProperty["Amount"].IsGreaterThan(5000m)
    .SelectMany().Unwrap<MyPoco>();

Pagination

Since Gemli is being built for web development, pagination support with pagination semantics (“page 3”, rather than “the rows between rows X and Y”) is a must. Many things besides data grids require pagination; in fact, pretty much any list needs pagination unless you intend to show as many items on a page as there are rows in the database table.

Client-side pagination is implemented by default, but only if DB server-side pagination is not handled by the data provider. I still intend to add DB-side pagination for SQL Server.
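The page-number semantics boil down to a simple offset calculation, and the client-side fallback is just a slice over the full result set. A minimal sketch (the helper names are mine, not Gemli's):

```javascript
// Hypothetical helper: translate 1-based page semantics into a row range.
function pageToRange(page, itemsPerPage) {
    var offset = (page - 1) * itemsPerPage; // page 4 @ 20/page -> offset 60
    return { offset: offset, limit: itemsPerPage }; // i.e. rows 61-80, 1-based
}

// Client-side pagination: when the DB provider can't paginate,
// fetch everything and slice the requested window out locally.
function paginateClientSide(rows, page, itemsPerPage) {
    var r = pageToRange(page, itemsPerPage);
    return rows.slice(r.offset, r.offset + r.limit);
}

console.log(pageToRange(4, 20)); // { offset: 60, limit: 20 }
```

DB-side pagination replaces the slice with SQL the provider injects (e.g. OFFSET/FETCH or a ROW_NUMBER() window on SQL Server), but the page-to-range arithmetic stays the same.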

// get rows 61-80, or page 4 @ 20 items per page, of my filtered query
var myListOfStuff = DataModel<Stuff>.NewQuery()
    .WhereProperty["IsActive"].IsEqualTo(true)
    .Page[4].OfItemsPerPage(20)
    .SelectMany().Unwrap<Stuff>();

To add server-side pagination support for a proprietary database, you can inherit DbDataProvider and override the two variations of CreateCommandBuilder() (one takes a DataModel for saving, the other takes a DataModelQuery for loading). The command builder object has a HandlePagination property that can be assigned a delegate. The delegate would then add or modify properties in the command builder that inject appropriate SQL text into the command.

Count

Support for a basic SELECT COUNT(*) FROM MyTable WHERE [..conditions..] is a pretty obvious part of any minimal database interaction library (O/RM or what have you). For this reason, GetCount<TModel>(DataModelQuery query) has been added to DataProviderBase, and in DbDataProvider it replaces the normal LoadModel command builder generated text with SELECT COUNT(*)…  The DataModelQuery<TModel> class also has a new SelectCount() method which forwards the invocation to the DataProvider.

long numberOfPeepsInMyNetwork = DataModel<Person>.NewQuery()
    .WhereProperty["Networkid"].IsEqualTo(myNetwork.ID)
    .SelectCount();

All of these changes are still pending another alpha release later. But if anyone wants to tinker it’s all in the dev branch in source code at http://gemli.codeplex.com/ .

LINQ May Be Coming

I still have a lot of gruntwork to do to get this next alpha built, particularly in adding a lot more tests. I still haven’t done enough testing of deep loads and deep saves and I’m not even confident that deep saves are working correctly at the basic level as I haven’t yet used them. Once all of that is out of my way, I might start looking at adding LINQ support. I’m not sure how awfully difficult LINQ support will prove to be yet, but I’m certain that it’s doable.

One of the biggest advantages of LINQ support is strongly typed member checking. For example, in Gemli you currently have to declare your WHERE clauses with a string reference to the member name or the column name, whereas with LINQ the member name can be referenced as strongly typed code. As far as I know, LINQ does this with syntactic sugar that ultimately boils down to either compiled delegates that work with the objects directly or expression trees that get translated, depending on how the LINQ support was added.

Among the other advantages of adding LINQ support might be in adding support for anonymous types. For example, right now without LINQ you’re limited to DataModels and the class structures they represent. But LINQ lets you select specific members into your return object, for example, rather than the whole enchilada.

LINQ:

// only return ID and Name, not the whole Product
var mySelection = (from p in Product
                   select new { p.ID, p.Name }).ToList();

It’s this underlying behavior that creates patterns I might leverage to bring support for SQL aggregate functions into play with terse Gemli+LINQ semantics. Right now, it’s literally impossible for Gemli to select the result of an aggregate function (other than Count(*)) unless it is wrapped in a stored procedure on the DB side and then wrapped in a DataModel on Gemli’s side. There’s also no GROUP BY support, etc. I like to believe that LINQ will help me consolidate the interface patterns I need to make all of that work correctly. However, so far I’m still just one coder, as no one has jumped on board with Gemli yet, so we’ll see if LINQ support makes its way in at all.


Pet Projects | Software Development

Gemli Project v0.2 Released

by Jon Davis 21. September 2009 03:57

Earlier this late night I posted the first major overhaul update of Gemli since a month ago. This brings a huge number of design fixes and new features to Gemli.Data and makes it even more usable. It's still alpha, though, as there's a lot more testing to be done, not to mention still some new features to be added.

http://gemli.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=33281

From the release notes:

  • Added to DataProviderBase: SupportsTransactions { get; } , BeginTransaction(), and BeginTransaction(IsolationLevel)
  • Facilitate an application default provider: Gemli.Data.Providers.ProviderDefaults.AppProvider = myDBProvider .. This allows DataModels and DataModelCollections to automatically use the application's data provider when one hasn't already been associated with the model (so you can, for example, invoke .Save() on a new model without setting its provider manually)
  • Replace (and drop) TypeConvertor with DbTypeConverter.
  • Operator overloads on DataModelQuery; you can now use == instead of IsEqualTo(), but this comes at the cost of chainability. (IsEqualTo() is still there, and still supports chaining.)
  • Added SqlDbType attribute property support. DataType attribute value is no longer a DbType and is instead a System.Type. The properties DbType and SqlDbType have been added, and setting any of these three properties will automatically translate and populate the other two.
  • WhereMappedColumn is now WhereColumn
  • Facilitate optional behavior of inferring from all properties not marked as ignore, rather than the old behavior of inferring from none of the unattributed properties unless no properties are attributed. The old behavior currently remains the default. The new behavior can be applied with DataModelTableMappingAttribute's PropertyLoadBehavior property
    • This may change to where the default is InferProperties.NotIgnored (the new behavior), still pondering on this.
  • Add sorting to MemoryDataProvider
  • TempMappingTable is now DataModelMap.RuntimeMappingTable
  • ForeignDataModel.ForeignXXXX was backwards and is now RelatedXXXX (multiple members)
  • DataModelFieldMappingAttribute is now DataModelColumnAttribute.
  • DataModelMap.FieldMappings is now DataModelMap.ColumnMappings.
  • DataModelTableMappingAttribute is now DataModelTableAttribute.
  • DataModelForeignKeyAttribute is now ForeignKeyAttribute.
  • The XML serialization of all elements now follows these renamings and is also now camelCased instead of ProperCase.
  • XML loading of mapping configuration is getting close, if it isn't already there. Need to test.
  • Added DataModelMap.LoadMappings(filePath)


C# | Cool Tools | Pet Projects | Software Development

Introducing The Gemli Project

by Jon Davis 23. August 2009 21:11

I’ve just open-sourced my latest pet project. This project was started at around the turn of the year, and its scope has been evolving over the last several months. I started out planning on a framework for rapid development and management of a certain type of web app, but the first hurdle as with anything was picking a favorite DAL methodology. I tend to lean towards O/RMs as I hate manually managed DAL code, but I didn’t want to license anything (it’d mean sublicensing if I redistribute what I produce), I have some issues with LINQ-to-SQL and LINQ-to-Entities, I find nHibernate difficult to swallow, I have some reservations about SubSonic, and I just don’t have time nor interest to keep perusing the many others. Overall, my biggest complaint with all the O/RM offerings, including Microsoft’s, is that they’re too serious. I wanted something lightweight that I don’t have to think about much when building apps.

So as a project in itself I decided first to roll my own O/RM, in my own ideal flavor. Introducing Gemli.Data! It’s a lightweight O/RM that infers as much as possible using reflection, assumes the most basic database storage scenarios, and helps me keep things as simple as possible.

That’s hype-speak to say that this is NOT a “serious O/RM”, it’s intended more for from-scratch prototype projects for those of us who begin with C# classes and want to persist them, and don’t want to fuss with the database too much.

I got the code functioning well enough (currently 92 unit tests, all passing) that I felt it was worth it to go ahead and let other people start playing with it. Here it is!

Gemli Project Home: http://www.gemli-project.org/ 

Gemli Project Code: http://gemli.codeplex.com/

Gemli.Data is currently primarily a reflection-based mapping solution. Here’s a tidbit sample of functioning Gemli.Data code (this comes from the CodePlex home page for the project):

// attributes only used where the schema is not inferred
// inferred: [DataModelTableMapping(Schema = "dbo", Table = "SamplePoco")]
public class SamplePoco
{
    // inferred: [DataModelFieldMapping(ColumnName = "ID", IsPrimaryKey = true, IsIdentity = true, 
    //     IsNullable = false, DataType = DbType.Int32)] // note: DbType.Int32 is SQL type: int
    public int ID { get; set; }

    // inferred: [DataModelFieldMapping(ColumnName = "SampleStringValue", IsNullable = true, 
    //     DataType = DbType.String)] // note: DbType.String is SQL type: nvarchar
    public string SampleStringValue { get; set; }

    // inferred: [DataModelFieldMapping(ColumnName = "SampleDecimalValue", IsNullable = true, 
    //     DataType = DbType.Decimal)] // note: DbType.Decimal is SQL type: money
    public decimal? SampleDecimalValue { get; set; }
}

[TestMethod]
public void CreateAndDeleteEntityTest()
{
    var sqlFactory = System.Data.SqlClient.SqlClientFactory.Instance;
    var dbProvider = new DbDataProvider(sqlFactory, TestSqlConnectionString);

    // create my poco
    var poco = new SamplePoco { SampleStringValue = "abc" };

    // wrap and auto-inspect my poco
    var dew = new DataModel<SamplePoco>(poco); // data entity wrapper

    // save my poco
    dew.DataProvider = dbProvider;
    dew.Save(); // auto-synchronizes ID
    // or...
    //dbProvider.SaveModel(dew);
    //dew.SynchronizeFields(SyncTo.ClrMembers); // manually sync ID

    // now let's load it and validate that it was saved
    var mySampleQuery = DataModel<SamplePoco>.NewQuery()
        .WhereProperty["ID"].IsEqualTo(poco.ID); // poco.ID was inferred as IsIdentity so we auto-returned it on Save()
    var data = dbProvider.LoadModel(mySampleQuery);
    Assert.IsNotNull(data); // success!

    // by the way, you can go back to the POCO type, too
    SamplePoco poco2 = data.Entity; // no typecast nor "as" statement
    Assert.IsNotNull(poco2);
    Assert.IsTrue(poco2.ID > 0);
    Assert.IsTrue(poco2.SampleStringValue == "abc");

    // test passed, let's delete the test record
    data.MarkDeleted = true; 
    data.Save();

    // ... and make sure that it has been deleted
    data = dbProvider.LoadModel(mySampleQuery);
    Assert.IsNull(data);

}

Gemli.Data supports strongly typed collections and multiple records, too, of course.

var mySampleQuery = DataModel<SamplePoco>.NewQuery()
    .WhereMappedColumn["SampleStringValue"].IsLike("%bc");
var models = dbProvider.LoadModels(mySampleQuery);
SamplePoco theFirstSamplePocoEntity = models.Unwrap<SamplePoco>()[0];
// or.. SamplePoco theFirstSamplePocoEntity = models[0].Entity;

Anyway, go to the URLs above to look at more of this. It will be continually evolving.


C# | Open Source | Software Development | Pet Projects

CAPTCHA This, Yo! - Early Alpha

by Jon Davis 14. July 2009 01:54

I've posted a mini-subproject:

http://www.CAPTCHAThisYo.com/

The site is self-explanatory. The idea is simple. I want CAPTCHA. I don't want to support CAPTCHA in my apps. I just want to drop in a one-liner snippet somewhere and call it done. I think other people share the same desire. So I now support CAPTCHA as a CAPTCHA app. I did all the work for myself so that I don't have to do that work. I went through all that trouble so that I don't have to go through the trouble .... Wait, ...

Seriously, it's not typical CAPTCHA, and it's Not Quite Done Yet (TM). It's something that'll evolve. Right now there isn't even any hard-to-read graphic CAPTCHA.

But what I'd like to do is have an ever-growing CAPTCHA questions library and, by default, have it just rotate through them randomly. The questions might range from shape detection to riddles to basic math. I'd really like to have some kind of community uploads thingmajig with ratings, so that people can basically share their own CAPTCHA solutions and they all run inside the same captchathisyo.com CAPTCHA engine. I'm just not sure yet how to pull that off.

Theoretically, I could take the approach Microsoft took when C# was initially released (long, long ago, in a galaxy far, far away): they had a cool insect sandbox game where you could write a .NET class that implements some interface, send it up to the server, and it would just run as an insect on the server. The objective of the game was to write the biggest killer/eater. I'm not sure how feasible the idea of opening up all .NET uploads to the server is, but it's something I'm pondering.

Anyway, the concept has been prototyped and the deployed prototype is sound, but I still need to work out cross-site scripting limitations, so bear with me. I also still need to find a designer to make something beautiful out of it. That said, feel free to use it and give feedback. Stand by.


C# | Computers and Internet | Cool Tools | Pet Projects | Techniques | Web Development


 

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
