jQuery Failed Me?

by Jon Davis 10. December 2008 19:52

I had a painful and uncomfortable epiphany today, one that should not go unnoticed. In building up a form consisting of about 30 fields, I handled all of the "submitted values restoration" data binding--that is, re-populating the fields' values after the form was submitted and failed validation--in script, using jQuery, like so:

(Yes, this is VBScript in ASP Classic:)

	<% For Each field In MyDictionary %>
	$("#<%=field%>").val("<%=Request(field)%>");<%
	Next %>
	

This was a really simple notion, to data-bind all my fields with just three lines of code. And it worked. It worked great. Except for one problem.

With thirty fields, plus code to add highlighting and such, IE7 (yes, that monstrous piece of doo-doo, Internet Explorer) kept locking up for five seconds every time the page was refreshed--that is, every time the form was posted back with invalid field values--while it waited for jQuery to find each of these fields and populate their values. Now, I'm sure I could go back and do some optimizations; for example, I could have inverted the loop and limited jQuery to a single selector search, something like:

	var fieldMap = {};
	<% For Each field In MyDictionary %>
	fieldMap["<%=field%>"] = "<%=Request(field)%>";<%
	Next %>
	// One pass over all inputs; skip any that have no posted value.
	$("input").each(function() {
	    var name = $(this).attr("name");
	    if (fieldMap.hasOwnProperty(name)) {
	        $(this).val(fieldMap[name]);
	    }
	});

But besides the fact that my code just grew, there was another problem with the jQuery approach: some people turn script off altogether. Ultimately I had to do what I had tried so hard not to do, which was this junky markup:

	<input type="checkbox" <%If Request("foobar") = "ABC" Then %>
	checked="checked"<% End If %> name="foobar" value="ABC" />
	<input type="text" value="<%=Request("acme")%>" name="acme" /> 
	

Now all of a sudden I had messy, difficult-to-maintain code. Any time you put implementation detail directly in the markup--as here, where the data binding is hand-wired to each named field instance--you have messy code; I prefer to go abstract as much as possible.

But I didn't see any way around this.

Generally, people don't run into this problem. They typically settle for something like Web Forms, which imposes View State and ultimately suffers from similar performance issues in different ways.

I was lucky. I was doing my ASP Classic coding in Visual Studio 2008, which, I might add, would be a GREAT ASP Classic IDE except for one not-so-small problem: VS crashes every time you quit the ASP Classic debugger. :(  Anyway, Visual Studio gave me its regex Find-and-Replace tool, which I put to use without a second thought; as soon as I realized I had to manually inject my binding code into the HTML markup, I started doing things like this:

        Find: name\={:q}
Replace With: name\=\1 value\=\"\<\%\=Request(\1)\%\>\"

I did have a lot of bad runs across the several changes I had to make (thank Microsoft for Undo->Replace All), but in the end I still saved a few minutes. (In Visual Studio's old-style regex syntax, :q matches a quoted string, and wrapping it in braces tags the match so \1 can reuse it--turning name="acme" into name="acme" value="<%=Request("acme")%>".)

The end result was instantaneous refresh. Zero delay.

But after all that, I got up and took a nature break (too much information?) and started asking myself: wait, what just happened? Until now I'd been singing the praises of jQuery--how wonderful it is, how it's the panacea for all the world's problems, how hunger and disease in Zimbabwe and elsewhere in the world will end once we all adopt jQuery.

Now I'm coming to find that DOM traversal, in general, is not something that should be done on the client for such core operations as data binding. This had me thinking again about Aptana Jaxer. Jaxer gives you jQuery (actually, browser-compatible JavaScript in general) on the server, by hosting a Mozilla-based browser DOM on the server and spitting out to the client the resulting markup in its modified state. But would that be any more performant?

It would be more performant than IE7, at least; Mozilla's JavaScript runtime is a lot faster than IE's. But it's still the same DOM-traversal cost, just paid on the server before the client ever receives anything. If nothing else, it would at least be a perceived performance improvement, because you wouldn't see a web page build up all around you and then have to wait five seconds while the browser freezes.

Even so, Jaxer is not available in my workplace, nor should I expect it to be in my case.

I ended the day realizing that my curiosity about migrating all templating, forms, and view logic off the server and onto the client might not be as desirable a notion as I thought it would be. This could be a problem, too, for ASP.NET AJAX 4.0, which intends to migrate data binding to the client (something I celebrated when I found out about it).

Does 1 Millisecond Matter?

by Jon Davis 6. July 2008 20:31

I'm casually skimming an ASP.NET book for review purposes and I came across mention of the connection factory classes in ADO.NET 2.0.

I had forgotten about these; I've always seen abstract, app-specific DAL base classes that get implemented with a SQL Server, Access, or other database-specific implementation, but I've never seen anyone use DbProviderFactories.

The book claims that these factory classes provide database neutrality in instantiating a database connection, so that you can use SqlConnection but also OdbcConnection, et al, without changing or recompiling any of the codebase, "without affecting the application's performance!"

No performance hit? Is it not using reflection? I fired up Reflector to introspect these classes, namely System.Data.Common.DbProviderFactories, System.Data.Common.DbConnection, System.Data.Common.DbCommand, and System.Data.Common.DbDataReader. Reflection is indeed used. That's fine--reflection is there for a reason--but when used in any loop it is also notoriously slow (at least 10x the invocation time of a direct, strongly typed invocation). I suppose if the application has a very light load, it might not matter.

I wrote and ran a performance comparison test in a console app. First I ran two near-identical methods separately, each in a loop (1000x): one method using DbProviderFactories and one using SqlConnection directly, both using SELECT to return all rows of a single-row, 4-column table. Then I realized it would be better to measure the last run of each, because the first few runs--and especially the very first--will be expectedly slower due to runtime caching and JITting.
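For reference, here's a minimal sketch of that comparison--not the original test code; the connection string and table name are placeholders, and the harness reports only the final, fully JITted iteration of each loop:

	using System;
	using System.Data.Common;
	using System.Data.SqlClient;
	using System.Diagnostics;

	class ConnectionBenchmark
	{
	    // Placeholder connection string and query -- substitute your own.
	    const string ConnStr = "Data Source=.;Initial Catalog=TestDb;Integrated Security=True";
	    const string Query = "SELECT * FROM TestTable";

	    static void Main()
	    {
	        Console.WriteLine("Factory:   {0} ticks (last of 1000)", TimeLastRun(UseFactory));
	        Console.WriteLine("SqlClient: {0} ticks (last of 1000)", TimeLastRun(UseSqlClient));
	    }

	    // Run the method 1000x and report only the final iteration,
	    // so JITting and runtime caching don't skew the number.
	    static long TimeLastRun(Action method)
	    {
	        long lastTicks = 0;
	        for (int i = 0; i < 1000; i++)
	        {
	            Stopwatch sw = Stopwatch.StartNew();
	            method();
	            sw.Stop();
	            lastTicks = sw.ElapsedTicks;
	        }
	        return lastTicks;
	    }

	    static void UseFactory()
	    {
	        // Provider-neutral: concrete types are resolved at runtime via reflection.
	        DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");
	        using (DbConnection conn = factory.CreateConnection())
	        {
	            conn.ConnectionString = ConnStr;
	            conn.Open();
	            using (DbCommand cmd = conn.CreateCommand())
	            {
	                cmd.CommandText = Query;
	                using (DbDataReader reader = cmd.ExecuteReader())
	                    while (reader.Read()) { }
	            }
	        }
	    }

	    static void UseSqlClient()
	    {
	        // Strongly typed: everything is bound at compile time.
	        using (SqlConnection conn = new SqlConnection(ConnStr))
	        {
	            conn.Open();
	            using (SqlCommand cmd = new SqlCommand(Query, conn))
	            using (SqlDataReader reader = cmd.ExecuteReader())
	                while (reader.Read()) { }
	        }
	    }
	}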

Here's the end result:

Factory:        23739 ticks / 2ms (total @ 1000x: 2331ms)
SqlClient:      11233 ticks / 1ms (total @ 1000x: 1321ms)

Now the question becomes: does a 1 millisecond difference per connection instance matter, considering how high that number is going to go once the query goes over the wire, and data load plus business logic push things anywhere from 10ms to 1000ms?

Perhaps not. There is a difference, but it is subtle. The debate is kind of like the debate about "" versus String.Empty.


Why Not Big RAM, RAM Disk, and 10 Gigabit Switches and Adapters?

by Jon Davis 9. November 2007 14:54

This is just something I've been pondering lately. I've been doing a lot of work this year with Lucene.Net (a port of Lucene, which is written in Java, to .NET) to manage a search engine. In our configuration, it uses RAMDirectory objects to retain the indexes in memory, then searches the indexed content as though it were on disk. It takes up a lot of RAM, but it's very performant. A search query, including the network load of transferring the XML-based query and the XML-based result set (over Windows Communication Foundation), typically takes about 0.05 seconds over a gigabit switch on standard, low-end, modern server hardware.

We don't just spider our sites with this stuff as with Google (or Nutch). We manually index our content using real field names and values per index, very similar to SQL Server tables, except that you can have multiple same-name fields with different values in the same index record ("document"), which is great for multiple keywords. If we could get it to join indexes on fields the way SQL Server joins tables on fields, and to accept arbitrary functions or delegates as query parameters (which is DOABLE!!), we'd have something useful enough for us to consider throwing SQL Server completely out the window for read-only tasks and getting a hundredfold performance boost. Yes, I just said that!!
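To illustrate the field-per-document model, here's a minimal sketch against the Lucene.Net 2.x API of that era; the field names and values are made up:

	using Lucene.Net.Analysis.Standard;
	using Lucene.Net.Documents;
	using Lucene.Net.Index;
	using Lucene.Net.Search;
	using Lucene.Net.Store;

	class RamIndexDemo
	{
	    static void Main()
	    {
	        // Build an in-memory index whose records ("documents") carry
	        // named fields, much like SQL Server columns.
	        RAMDirectory dir = new RAMDirectory();
	        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(), true);

	        Document doc = new Document();
	        doc.Add(new Field("title", "Why Not Big RAM?",
	            Field.Store.YES, Field.Index.TOKENIZED));
	        // Multiple same-name fields in one document -- great for keyword lists.
	        doc.Add(new Field("keyword", "hardware", Field.Store.YES, Field.Index.UN_TOKENIZED));
	        doc.Add(new Field("keyword", "performance", Field.Store.YES, Field.Index.UN_TOKENIZED));
	        writer.AddDocument(doc);
	        writer.Close();

	        // Search the in-memory index as though it were on disk.
	        IndexSearcher searcher = new IndexSearcher(dir);
	        Hits hits = searcher.Search(new TermQuery(new Term("keyword", "performance")));
	        System.Console.WriteLine(hits.Length() + " hit(s)");
	    }
	}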

Because of the load we put on RAM, trying to keep the I/O off the SCSI adapter and limit it to the memory bus, all of this has led me to question why network and RAM capacities have not evolved nearly as fast as hard drive capacities. It seems to me that a natural and clean way of managing the performance of any high-traffic, database-driven web site is to minimize I/O contention, period. I hear about people spending big money on redundant database servers with terabytes of storage space, but then only, say, 16 GB of RAM and a gigabit switch. And that's fine, I guess, considering how the prices escalate out of control when the scale goes much higher than that.

That, then, is my frustration. I want 10 gigabit switches and adapters NOW. I want 128GB RAM on a single motherboard NOW. I want 512GB solid state drives NOW. And I want it all for less than fifteen grand. Come on, industry. Hurry up. :P

But assuming the hardware became available, this kind of architectural shift would be a shift, indeed, that would also directly affect how server-side software is constructed. Microsoft Windows and SQL Server, in my opinion, should be overhauled. Windows should natively support RAM disks. Microsoft yanked an in-memory OLE DB database provider a few years ago, and I never understood why. And while I realize that SQL Server needs to be rock-solid about reliably persisting committed database transactions to long-term storage, there should be greater design flexibility in the database configuration, and greater runtime flexibility, such as in the Transact-SQL language, over how transactions persist (lazily or atomically).

Maybe I missed stuff that's already there, which is actually quite likely. I'm not exactly an extreme expert on SQL Server. I just find these particular aspects of data service optimizations an area of curiosity.

Lucene.net: ICloneable on RAMDirectory

by Jon Davis 16. October 2007 16:04

 

http://issues.apache.org/jira/browse/LUCENENET-103

The story explains itself:

The objective of cloning was to make it more performant; the Directory-copy approach was slower. For our purposes, a manual RAMDirectory duplication using IndexWriter.AddIndexes() took thirty seconds, Directory.Copy took 1299ms, a deep clone took 669ms, and a shallow clone took 47.66ms (with a LOT less RAM usage). We are going with the shallow clone: this is a multi-threaded server and there are thread locks all over the Lucene objects, but we never modify a RAMDirectory once it is loaded. Rather, we rebuild the RAMDirectory in the equivalent of a cron job, then clone it across multiple threads.
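A rough sketch of that rebuild-then-clone pattern, assuming the ICloneable patch from this issue is applied to RAMDirectory (RebuildIndex() here is a hypothetical placeholder for re-indexing the source content):

	using System;
	using Lucene.Net.Store;

	class SearchIndexHost
	{
	    private volatile RAMDirectory _master;

	    // Runs in the equivalent of a cron job: rebuild the master index,
	    // then swap the reference in. The master is never modified after load.
	    public void Rebuild()
	    {
	        _master = RebuildIndex();
	    }

	    // Each searching thread takes a cheap shallow clone, so it never
	    // contends with other threads on Lucene's internal thread locks.
	    public RAMDirectory GetSnapshot()
	    {
	        return (RAMDirectory)((ICloneable)_master).Clone();
	    }

	    // Hypothetical: re-index source content into a fresh RAMDirectory.
	    private RAMDirectory RebuildIndex()
	    {
	        return new RAMDirectory();
	    }
	}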



 


About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.


