Update on my stuff - June 6, 2008

by Jon Davis 7. June 2008 14:29

OK, it's been a while since I did any real blogging, so here's a sum-up:

  • I've still forgotten how to blog. This blog has become a high-level geek toy discovery dump rather than a software development blog. While I make no apologies for that--it's my blog and I can post whatever I want--I'm trying to figure out if I should fork it: one blog for serious software development discoveries and one for shallow geek discoveries. I've also been wishing I could add a personal blog somewhere, not here, and post thoughts and feelings about non-tech stuff. (This is something I used to do, but it got carried away, so I boycotted it and focused purely on geek speak here. I miss personal blogging, though, and I feel it would be good for me to do it again, if I'm just a bit more careful.)
     
  • I keep mentioning the LION Search Service, how we intend to open source it, and how it's been cancelled, and then I deleted that cancellation post, etc. It keeps coming back and going away and coming back again. Well, part of that is just the reality of how it's being treated at the office; open sourcing LION required all-or-nothing support from my teammates, and I was getting really pointless "I'm building my own search service instead of yours but I'm gleaning ideas from yours" feedback from others on the team. Lately, though, they've discovered that what they were doing accomplished the same tasks as what I had already implemented, and, lacking the time to continue, they reverted and adopted LION. So LION is back, the team has adopted it, and we're about to go live with our first faceted-searching web site using LION (a couple of other web sites were already using it, but without the new faceted searching). It's now a lot more official: LION is our exciting new search platform going forward.

    So,
    • LION is a nearly-sort-of-enterprise-class search engine, written in C#, currently in development, built on Lucene.NET, and its continued development is inspired in part by the Apache Solr project (though LION was conceived and put into production before we discovered Solr).
      • It currently offers
        • rich, paginated queries with field and index selection,
        • Lucene.NET break-neck speeds,
        • faceted searching,
        • strong document modeling, and
        • dynamic updates (currently limited to index A/B switching but that will be improved soon).
      • It currently runs on WCF (.NET 3.0). Not BizTalk or anything otherwise weird or expensive to support. It currently does not have, but will soon have, REST and AJAX support.
      • It does not have, and might not ever have, the high-availability "server farm" feature set that I believe Solr offers. Service stability and uptime have been a constant thorn in our sides, so rethinking and possibly rewriting portions of the service itself is going to be necessary before we put it out there for the world to consume as open source.
      • It is currently not, but soon will be, built around IIS 7. Right now it's just a standalone console application, wrapped in a Windows service.
    • We will still be open-sourcing LION, eventually. Right now it's looking like a late-summer or fall time frame.
    • LION will be "internally sourced" first, shared with other departments in our large and disparate company to be evaluated and discovered. (I think this is very wise, just something I didn't think about until the boss said to do it that way.)
    • LION is now under team-shared ownership; since I built it on the job, there's little I can do to stop the boss from handing it to a certain other team member and telling him to own it, which would be totally unsurprising. While I wouldn't agree with that (I've poured my heart and soul into LION, and I'm also the oldest and most experienced person on the team, even if I have a couple of areas of technical weakness), it's not a situation I can change without throwing the baby out with the bathwater, and I'll live.
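The index A/B switching mentioned above can be sketched roughly as follows. This is an illustrative pattern, not LION's actual code; SearchIndex and the member names are hypothetical placeholders.

```csharp
using System;
using System.Threading;

// Illustrative placeholder for a loaded, never-modified-after-build index.
public class SearchIndex : IDisposable
{
    public void Dispose() { /* release index resources */ }
}

public class IndexSwitcher
{
    // The "live" side that all queries currently read from.
    private SearchIndex _live = new SearchIndex();

    // Reference reads are atomic in .NET, so searcher threads can grab
    // the current index without locking.
    public SearchIndex Live
    {
        get { return _live; }
    }

    // Cron-style rebuild: build the "B" side offline, then swap it in.
    public void Rebuild()
    {
        SearchIndex fresh = new SearchIndex(); // re-index from source data

        // Atomic swap: in-flight searches finish against the old index,
        // while new searches pick up the fresh one.
        SearchIndex old = Interlocked.Exchange(ref _live, fresh);
        if (old != null)
            old.Dispose(); // retire the old side
    }
}
```

The appeal of this pattern is that queries never block on a rebuild; the cost is briefly holding two copies of the index in memory during the swap.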
        
  • I've been thinking about engaging in one or more side projects I would wholly and independently control. Given the big ideas I have, I'm still not sure I want to do one, though: it's all a lot of work, some of it very tempting to try to bring into the workplace, and I don't want my authoritative work to be disrespected there like LION has been. Even so, the ideas are still on the table for consideration:
      
    • BlogObjects - I registered BlogObjects.net back when I registered PowerBlog.net (PowerBlog was a commercial blogging desktop client application I wrote in VB6 and rewrote in C# about five years ago, with zero commercial success but significant personal and professional technical growth for myself), and I was considering making PowerBlog's development API a set of open-source blogging tools. I'm thoroughly persuaded that this will never be commercially viable, as blogging APIs are a dime a dozen, but, like PowerBlog was five years ago, starting from scratch on a client/server blogging API would be a good refresher on technology in general. If I do it, it will be not only open source but also cross-platform to Mac and Linux, and it will be complemented by a PowerBlog resurrection, or else a PowerBlog-like alternative with fewer features (no Active Scripting [VBScript, JavaScript], no team synchronization support).
        
    • CMSObjects - Taking BlogObjects to another level, CMSObjects would build upon BlogObjects's idea (but not the codebase) and add:
      • An enterprise class CMS service
      • Rich support for the Atom Publishing standard API
      • Content typing (for example, a "Story" article type, a "Video browser", a "Photo gallery", a "Product Detail" page, etc.), with scripted or compiled code definitions that support inheritance
      • Rich templating both per content type and per content instance
      • Workflows
        • Team member security (user can post, user can only read, user can only post to a particular category or to a particular URL prefix, etc.)
        • Team members' posts can be flagged for approval, and then approved by a senior editor
        • Content can be annotated, separately from the published output -- ideally, the annotations can be made inline, directly inside of the content
        • Content publishing to the web can be postponed to a designated publish date
      • Content versioning 
      • Again, completely open-sourced, but more web service oriented, possibly web-driven, and much less GUI-driven (so no PowerBlog for CMSObjects except for basic blogging to a CMSObjects service)
          
    • CRMObjects - Yeah the "Objects" suffix is getting a little silly, but my short experience with Sage Software's SalesLogix exposed me to another facet of software and business technology that reflects the genuine needs of businesses in general, which is using technology to support sales and support staff. CRM can be painfully messy and awkward, but tossing together some basic building blocks might be a fun and rewarding experience, even if it is not commercially viable (way too much competition, as with blogging). Who knows, though, businesses might actually use it, and supporting it could actually become lucrative.
        
    • ERPObjects - I'll do that by myself, in my spare time. Just kidding.
        
  • I have a book here on Cocoa programming. I'm curious about Objective-C and how it's a weakly typed, object-oriented superset of C, which just sounds weird (C being object-oriented and weakly typed?! .. weird! .. but cool!!). I can see why the book comes right out and says that it can be extremely dangerous; I saw enough of this with VB6's support for weak typing and evil Variants in my past. But keeping in line with C linguistics and C power, I'm very curious about it. I'm guessing that Microsoft's answer to Objective-C is C++/CLI, but the CLI in itself is still strongly typed, so you have to look at integration points with JScript.NET or something (but then there's Lua, Python, etc., which the non-Microsoft community also has). I watched an introduction to Objective-C video amongst the iPhone SDK videos recently and, while it does look different, I can see how it has been used to give Mac and iPhone developers a lot of shortcuts into fast and efficient software development.
      
  • Over the last two or three weeks I've become less and less excited about cross-platform development libraries and APIs, such as Mono, Java, wxWidgets, Qt4, and SDL, for a few reasons:
    • While cross-platform GUI APIs do actually work across platforms like they promise, their end output is a bit less predictable and/or desirable than one might expect.
      • Choosing a GUI API that uses its own UI rendering results in software whose look can be predicted by the developer, but it is not predictable by the user in the context of other software. Java Swing apps traditionally demonstrated this a long while back with their bluish, proprietary look and feel, and that's not what users want. Users want software that looks and feels like the software already installed on their systems. Sometimes these cross-platform APIs, like Java, come bundled with visual schemes that simulate these operating systems, but they are clearly faked and do not take advantage of the rendering APIs already offered by the core operating system.
      • Choosing a GUI API that uses the operating system's rendering and layouts (such as how the Mac uses the Aqua scheme but also puts the main menu at the top of the screen) does not always result in predictable rendering output. Sometimes, for example, the borders or clickable "handles" of a drop-down list stick out further to the right, above, or below the dimensions specified by the developer, or else the text rendering inside becomes unreadable because the internal padding or inner/inset "border" that contains the content is too small on one operating system while on the developer's operating system it appears fine. Sometimes there is a mismatch of pixels to DPI, with the GUI code using one unit of measure while the OS has adopted another, and so some "defensive layout programming" needed to be written but wasn't, because the developer's operating system didn't have this or that feature. And so on.
    • Java and C# are both still too slow for my liking, and other non-native languages are not mainstream enough to feel confident in building around with the knowledge that others can help carry the torch.
    • I've found that some cross-platform software applications that use generically cross-platform dependencies such as SDL are actually quite unstable. This doesn't mean that SDL et al are unstable, it just means that it is too easy for developers to build upon such a framework and not hash out platform-specific causes to application failures. There is no such thing as a silver bullet, SDL/wx/Qt/etc notwithstanding.
    • Ultimately, C and C++ (but not Objective-C) are themselves cross-platform languages, with platform-specific dependencies (and, of course, CPU-specific machine code compilation). The best thing to do, I think, is to get a handle on building C++ applications for one platform, but build things out generically enough that platform-specific dependencies are broken off into libraries. Then, when porting the applications, you only need to port the platform libraries. This is what most cross-platform software I've seen ends up doing when it also takes advantage of operating system features like DirectX.
    • Another approach might be to build upon a cross-platform API first, or perhaps even a cross-platform non-native language like C#, and then port each component, piece by piece, to C++ on behalf of the native platform.
    • None of this is to say that I think wx, Qt, SDL, et al. are worthless or that I won't use them on a regular basis (if indeed I can get myself coding C/C++ apps to begin with). I'm just saying that there is no silver bullet, and use of these libraries, rather than native Cocoa (for Mac) or MFC (et al., for Windows), would be a slight compromise for time and resources, not a magical, perfect answer for excellent software that "just works".
        
  • At the end of the day, I'm still scratching the surface and barely finding time enough to sleep, much less write the software I'm envisioning. Ugh.


Pet Projects | Software Development

Lucene.net: ICloneable on RAMDirectory

by Jon Davis 16. October 2007 16:04

http://issues.apache.org/jira/browse/LUCENENET-103

The story explains itself:

The objective for cloning was to make it more performant. The Directory copy approach was slower. For our purposes, the timings were thirty seconds for a manual RAMDirectory duplication using IndexWriter.AddIndexes(), 1299ms for Directory.Copy, versus 669ms for a deep clone, and 47.66ms for a shallow clone (and a LOT less RAM usage). We are going with a shallow clone because this is a multi-threaded server and there are thread locks all over the Lucene objects, but we don't modify a RAMDirectory once it is loaded. Rather, we rebuild the RAMDirectory in the equivalent of a cron job, then clone it across multiple threads.
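As a rough sketch of that rebuild-then-clone pattern, assuming the LUCENENET-103 patch that adds ICloneable to RAMDirectory (the class and member names here are illustrative, not our production code):

```csharp
using System;
using Lucene.Net.Store;

public class ClonedIndexProvider
{
    private RAMDirectory _master;

    // The expensive step, run in the equivalent of a cron job: load the
    // latest on-disk index into RAM once.
    public void RebuildMaster(Directory source)
    {
        _master = new RAMDirectory(source);
    }

    // The cheap step, run per worker thread: a shallow clone gives each
    // thread its own lock/state wrappers while sharing the underlying
    // buffers. This is safe only because the master RAMDirectory is
    // never modified after it is loaded.
    public RAMDirectory GetThreadLocalDirectory()
    {
        return (RAMDirectory)((ICloneable)_master).Clone();
    }
}
```

The shallow clone is what gets the time down from seconds to tens of milliseconds: nothing is copied except the wrapper state.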


Open Source | Software Development

Paying Attention To DinnerNow.net - Microsoft Killer App Best Practices Tutorial for Everything .NET 3.0

by Jon Davis 31. July 2007 18:01

I've been noticing my RSS feeds from Channel9 plugging some "DinnerNow" thing that sounded like some third-party company just sharing some architectural trick up their sleeve, so I kinda shrugged it off. After all, how many software tricks are up people's sleeves at sourceforge.net or elsewhere?

But being a big new fan of ARCast.TV podcasts, I came across the Architecture Overview and Workflow piece, and I didn't realize until I started listening that it was Part 1 of the DinnerNow bits. Listening to it now, I'm realizing that this is something rather special.

DinnerNow, which I've only just downloaded and haven't actually seen yet but I'm hearing these guys talk about it in the podcast, is apparently NOT a Starter Kit style mini-solution like IBuySpy was (which was a dorky little shopping cart web site starter kit that Microsoft hacked together in ASP.NET back in the v1.1 days as a proof of concept).

Rather, DinnerNow is a full-blown software sample of an online restaurant food ordering web service application, one that I had wanted to build commercially for years (along with a hundred other ideas I had). It's a top-to-bottom, soup-to-nuts, thorough and complete implementation of the entire solution, from the servers to the restaurants to the buyer, demonstrating all of the awesome components of the latest long-released .NET Framework technologies, including:

  • Windows Communication Foundation (WCF) on the restaurant and web server
  • Windows Workflow Foundation (WF) on the restaurant and web server
  • Windows Presentation Foundation (WPF) for the restaurant (kiosk @ kitchen)
  • ASP.NET AJAX 1.0 and JSON-based synchronization
  • CardSpace for user security
  • PowerShell for things like administrative querying
  • Microsoft Management Console (MMC) for administrative querying, graphically
  • Windows Mobile / .NET Compact Framework
  • Windows Management Instrumentation (WMI)

Ron Jacobs (the ARCast host) introduced this well with a statement I agree with, something along the lines of, "As architects, what we often try to do is look at someone else's software that was written successfully," and learn and discover new and/or best practices from it. In fact, that's probably the most important task in the learning process: both understanding new technology and discovering best practices come from seeing it for yourself.

Update: Ugh. With many technologies being featured come many prerequisites. I had all of the above, but the setup requires everything to be exact. In other words, start with a fresh Virtual Machine with Vista 32-bit, and add the prerequisites. XP? Screwed. Win2003? Screwed. 64-bit? Screwed. Orcas Beta 2 installed? Screwed. And so on the screwing goes. An optional Virtual PC download from MSDN premium downloads with everything installed would have been nice. And I'm not hearing anything from the CodePlex forum / Issue Tracker, not a peep from its "maintainers". I think this project was abandoned. What bad timing, to put it on ARCast.tv now...

Another update: Microsoft did post a refresh build for Visual Studio 2008 Beta 2. I noticed it on CodePlex, and they also started replying (for a change) to the multiple discussion threads and Issue Tracker posts there. Among those replies were comments about 64-bit support--not supported, by design [which I knew], but they said that with workarounds [which I posted multiple reports about] a 64-bit OS target build should be possible. I saw all this a few days ago; I should have posted this follow-up earlier, as it seems a Microsoft employee posted a comment here first. (Sorry.)

With passion comes passionate resentment when things go wrong. It's a fair give and take; on the other hand, whatever.

Game Math: Back To Basics

by Jon Davis 14. July 2007 21:13


The two additional books I ordered arrived earlier this week:

The first of the two is everything I anticipated it to be. It's chock full of reference material and sample code for virtually every typical game physics scenario one could think of. And it's totally written for coders, yet it can be read by anyone who knows the syntax; the sample code is C++ but could just as well be elegant Java or C#. I'm very excited to have this book.

The second book just jumps right in and vomits math formulae all over the place in a game-world applicable fashion. It's a good book, too.

Unfortunately, both books (but especially the latter) are over my head, as I hardly even know how to read these math notations, much less comprehend them. It's time to dig higher, closer to the surface. I ordered a few more math books; hopefully this additional investment will pave the way for me to better appreciate the books I already bought.

____________________________________________________________

Meanwhile, aside from math I snagged or pre-ordered a few more API-related books:

And, not gaming related, I ordered:

Yeah I buy a lot of books. I do this with every major technology cycle and/or career cycle.

Windows Communication Foundation (WCF): Beware the fake IDisposable implementation !!

by Jon Davis 23. May 2007 19:07

Yeesh. My fascination with WCF became red-faced shame overnight.

We're using WCF for both client and server on the same machine, so an ASP.NET web app can query a custom indexing service. Since this was a fresh project with no legacy constraints, I opted to use WCF rather than remoting to..., well, to drink the kool-aid I suppose, but I thought the argument made at the AZGroups presentation that "you shouldn't have to worry about the plumbing" was compelling. (Now that the solution is almost fully baked, I am really annoyed I went down this path, simply because of the hassle I went through in having to manually populate the original strong types in a shared codebase between client and server. IMO, DataContract-driven proxy code is only useful for third parties.)

An initial WCF implementation, with a simple loop that would create, invoke, and drop scope on a WCF client using named pipes to a WCF service, was freezing up after 12 iterations. Executing manually, roughly one iteration per second, it froze up on the 50th or so iteration.

Turned out I wasn't calling Close() and should have been. *blush* Of course. But I looked for Dispose() to see if I could use the using() statement, and it wasn't there. Or rather, it wasn't explicit; one must cast to IDisposable first before calling its Dispose() method.

Fixing that, now I was getting exceptions on Close() / Dispose() if the server had returned a Fault message. Buried deep in the far back of the WCF book I'm reading--and actually I had to use Reflector to figure this out before I looked in the book to see if I was right--is a brief mention not to use the using() statement with WCF clients, and not to call Dispose(), either, but to call Close() manually. Dispose() on WCF clients actually calls Close() internally. But just don't expect the CLR / compiler to pick that up; and you shouldn't always call Close(), either, but rather Abort(). Confused yet?

As I posted in Microsoft.public.windows.developer.winfx.indigo,

IDisposable was always perceived to be the happy, safe haven for getting rid of objects that use unmanaged resources. If something implemented IDisposable, Dispose() was always callable. Not so anymore.

((IDisposable)client).Dispose() can only be called on a WCF client if Close() can be called, because internally it calls Close(). Close() cannot be called unless the client is basically in the Open state; otherwise, you have to execute Abort() instead, which is not a member of IDisposable. This means that, even though the object does indeed implement IDisposable, its *SUPPORT* for IDisposable is 100% dependent upon the caller evaluating the State of the object to determine whether or not it's open. In other words, Microsoft has established a new precedent: IDisposable mandates extraneous state-checking code before its IDisposable implementation is usable, and the only thing you can do about it is wrap it.

I might've opted to create a new interface, IReallyDispose, but then I'd still have to implement it. I could create an abstract class, WcfDisposable, but C# doesn't support multiple inheritance. The best I can do is put a sticky note on my computer monitor that reads: "WCF client objects don't REALLY implement IDisposable unless they're Open!" Then I can only hope that I'll pay attention to my sticky note when I'm going about WCF coding.

Does anyone else besides me find this to be unacceptably stupid and messy? I really *WANT* to like WCF. I love greenfield projects that use promising new technology, but when new technology abandons key design patterns like this, it really gets under my skin.

Discussing the matter further, ..

This isn't about the object not being able to Close(). I don't mind Close() raising exceptions. The core problem is that IDisposable throws an exception just because the object is in a "Faulted" state, while the object retains unmanaged resources!! IDisposable is generic and agnostic to connections/sockets/pipes/channels/streams, so I disagree when most people say "Dispose() and Close() are one and the same", because they're not. What Dispose() is supposed to do is safely unload unmanaged resources, whether that means to Close() or not. WCF shouldn't implement IDisposable if IDisposable.Dispose() will ever throw exceptions. I don't care if Dispose() calls Close(), it should wrap that call with ...

void IDisposable.Dispose()
{
    if (this.State == CommunicationState.Closing ||
        this.State == CommunicationState.Closed ||
        this.State == CommunicationState.Faulted)
    {
        this.Abort();
    }
    else
    {
        this.Close();
    }
}

Instead, Reflector says it's implemented as such:

void IDisposable.Dispose()
{
    this.Close();
}

Since IDisposable has compile-time support for managing resources with Dispose, including the using() statement, this implementation is garbage.

There should be a working IDisposable.Dispose() that clears out unmanaged resources if you are *NOT* working in a transaction and have nothing to "abort" except the open connection itself. IMO, outside of a transaction, disposal of any object is an "abortion".

The bug in the design isn't just faulty Dispose(), but that IDisposable was implemented in the first place. The practice we are told to use is to ignore it, and to call Close() or Abort() ourselves. Therefore, it's not disposable, it's only Closable/Abortable, depending on state. Why, then, did they implement IDisposable?

Where does Microsoft stand on this? Well, according to this forum post [link], they couldn't figure out what to do themselves, so they released it with no real solution. Literally, "for good or for ill we have landed where we have", which was to try{} to Close, catch{} to Abort. Oh, nice planning. My respect for Microsoft just went down about 50 points.
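For reference, that "try to Close, catch to Abort" dance ends up looking like this in calling code. MyServiceClient and DoWork() are placeholders for any generated WCF proxy and its operation:

```csharp
using System;
using System.ServiceModel;

MyServiceClient client = new MyServiceClient();
try
{
    client.DoWork();
    client.Close();   // graceful shutdown; throws if the channel has faulted
}
catch (CommunicationException)
{
    client.Abort();   // tear down without the shutdown handshake
}
catch (TimeoutException)
{
    client.Abort();
}
catch
{
    client.Abort();   // unexpected failure: abort, then rethrow
    throw;
}
```

Which is exactly the state-checking boilerplate that a sane IDisposable implementation would have buried inside Dispose() for us.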


Software Development


 

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
