IIS Subweb Applications Are Virtual Directories

by Jon Davis 21. June 2009 14:32

It never ceases to amaze me how Microsoft pairs the most obscure error messages and support documentation with the simplest of causes.

HTTP Error 500.19 - Internal Server Error

Description: The requested page cannot be accessed because the related configuration data for the page is invalid.
Error Code: 0x80070005
Notification: BeginRequest
Module: IIS Web Core
Requested URL: http://www.mysite.com/myapp/
Physical Path: ~~
Logon User: Not yet determined
Logon Method: Not yet determined
Handler: Not yet determined
Config Error: Cannot read configuration file
Config File: ~~
Config Source:

   -1:

    0:
More Information... This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error.
 

I was getting IIS 7 error 500.19 on Windows Server 2008 over the weekend, and I spent hours on it. Google didn't help; everyone pointed to invalid XML in the web.config or in applicationHost.config, or said that there must be an invalid DLL reference in applicationHost.config, or said that I needed to add the proper users (IIS_IUSRS, Network Service, IUSR) to the directory and/or web.config. None of these solutions applied. There was nothing in my Windows event logs, and enabling IIS tracing produced no log files.

It turned out to be a simple cause: the physical directory as configured in Basic Settings for the application was wrong. Why Microsoft did not include this rather obvious scenario in the Help file for this error is beyond me!!

In my case, the root web was working fine, but the subwebs were not, and I got this error for a subweb. The subweb was an individually configured ASP.NET application. I had figured this didn't matter because the root web was just a flat HTML file, but it did.

What happened in my case was that a few days ago I had relocated the root web, then updated IIS to point to the new directory. All of the subweb applications, however, were treated by IIS as virtual directories, each with its own physical directory mapping. So each had the stale path. More specifically, I moved "C:\dir\www.mysite.com" to "C:\dir\mysite.com", updated IIS for my site to point to \dir\mysite.com, and left it as such. The applications under ...\mysite.com were each pointing to the stale absolute path of C:\dir\www.mysite.com\[application] instead of picking up the relative path of their parent directory.

I had to update each subweb application's Basic Settings to point to the revised path, and the 500.19 error went away.
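If you have a pile of subweb applications to repoint, you don't have to click through each one's Basic Settings. appcmd, which ships with IIS 7, can list every virtual directory's physical path so the stale ones stand out, and can update them. Here's a sketch from an elevated command prompt, using my example site and paths (yours will differ):

```shell
REM Show every virtual directory and its physical path,
REM so stale mappings stand out at a glance
%windir%\system32\inetsrv\appcmd.exe list vdir

REM Repoint one subweb application's virtual directory
%windir%\system32\inetsrv\appcmd.exe set vdir "www.mysite.com/myapp/" -physicalPath:"C:\dir\mysite.com\myapp"
```

Repeat the `set vdir` command for each application that's still pointing at the old folder.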

Hope this helps others like it would've helped me.

Jon

SiteCore: Is the ultimate CMS also the ultimate ASP.NET web app in general?!

by Jon Davis 5. November 2008 20:11

 Important note: Please read this article with a grain of salt and note my follow-up post.

At my previous job as a web developer at a B2C magazine publisher, my team was looking for CMS solutions that could be piggybacked to quickly build and sell web sites for advertisers--like microsites, except scalable enough to easily integrate e-commerce and educational content. For full-out magazine content, the company had been using Krang, a CMS written in Perl for MySQL, which made it a non-starter for this initiative since we were ASP.NET developers. We looked at a number of smallish CMS systems for ASP.NET--Umbraco, Graffiti, the SubSonic web starter kit, etc. Most of these were laughable as options; even Umbraco had to be shrugged off. DotNetNuke wasn't an option because it's a prefab portal, not really a CMS, and frankly it's embarrassing (read: ugly). SharePoint wasn't even worth considering because it, too, is a portal, and such a bear and a weird mutt of poorly integrated technologies and technology designs; plus it's an installation nightmare, and we didn't have the hours it takes to get the thing installed when we knew we wouldn't likely like it. We also considered CMS's using "wiki" nomenclature--pseudo-wikis, if you will--but those were not appropriate for the job.

But I recently began a job transition to another magazine publisher, this one B2B--a move that was mostly coincidence. They had the same need for a CMS, though on a larger scale (for their magazine web sites, not for microsites), but had already made the decision before I was interviewed. They'd researched many, many CMS's and considered all of them, but the CMS that won in their research, hands down, was one I'd never heard of called SiteCore.


SiteCore's desktop uses a XAML-to-XHTML engine called Sheer UI to create a beautiful content editor and developer user interface.

I'm not entirely sure why I hadn't heard of SiteCore; they've actually been around since ASP.NET v1.0. But I suspect it has to do with the fact that they've apparently gone through a very large number of overhauls along the way, such that anyone who evaluated SiteCore even fairly recently might not recognize what sort of monumental, breathtaking system this puppy is today.

I just completed training and am now a certified SiteCore developer. But before I took the training I was completely stoked about what was coming--the opportunity to work with this software.

SiteCore v6 is built upon ASP.NET 3.5, and it is impressively respectful of maintaining an ASP.NET-oriented, extensible developer workflow. That is to say, unlike other CMS's that try to break out of the box by using IronPython scripts (like Umbraco), or odd proprietary XML markup (SharePoint's CAML), or by being buried in a virtual machine image with a PHP front-end (like MindTouch Deki, yuck), SiteCore lets you extend the CMS using your choice of ASPX pages with placeholders, ASCX controls with placeholders, C# code-only web controls, and .NET-extendable XSLT templates. Everything else--articles, data records, drop-down options, security features, workflow states--is just an "item": a tree node with configurable settings. From what I can tell, on the developer and administrator end there aren't a lot of .NET 3.5 features in use right now--no LINQ, for example--but that's not the point. The point is that SiteCore makes the perfect CMS core of a site that you can extend with ASP.NET 3.5 with ease and without having to shoehorn anything, because SiteCore is built to be respectful of pure ASP.NET. Even the pipelines and events are configurable and extensible, as all of the pipeline stages and event handlers are declared in a huge web.config. The web.config file also exposes niceties like Lucene.Net search for content developers, and multi-site configurations. And yes, you use Visual Studio 2008 to extend a SiteCore-based site.

And these are just the fundamental aspects of what makes this product so awesome. Wait till you see the fluff!! SiteCore exposes to content developers, editors, and designers a user interface so profoundly rich that it would make the Microsoft Office Live team pee in their pants. (And given that Microsoft is itself a SiteCore customer, I'm sure they did exactly that.) SiteCore uses a proprietary dialect of XAML called "Sheer UI" that it converts to XHTML+Javascript to emit a gorgeous Windows XP-like desktop with a Start menu and everything, the point of which is not to 'wow' (which it does anyway) but to provide a focused workspace of tools for accessing CMS content, data structures ("data templates"), workflow management, security settings, and new feature development tools.


The funny thing is, there are really very few "bad smells" with this system. The whole thing just feels right. It was designed right. It doesn't feel awkward, kludgy, or "shortcutted" in any way. If it does smell, it has more of a new car smell than a body odor smell.

They say a picture is worth a thousand words, but then a video is worth a million.

Now, be aware, the pro versions of SiteCore don't come cheap. In my opinion, you get what you pay for, because this product is just so profoundly good. It's kinda like Apple's expensive products: a fixed set of functions done right, no weirdness, just goodness. Where SiteCore differs from Apple, though, is in its completeness. After sitting through 3 days of training this week I kept having to pull up my jaw, because the level of detail those guys went through to cover all aspects of real CMS--and to do it cleanly, without making a mess--is just .. plain .. insane. I mean, just to throw some examples out there ...

  • Tweakable URL routing
  • *Real* content versioning, with a side-by-side diff tool.
  • Extensive (i.e. unlimited) multi-language support for content (and multi-lingual content editing / development in a few different prefab languages)
  • Extensive ACL-like security on a per-content-item basis
  • Field-level security on content fields!
  • Multiple inheritance for tree items!
  • Consolidated editing and publishing environment (that can be optionally staged in isolation)
  • Configurable workflows (i.e. approval processes)
  • Application-extensible -- build your own applications for the SiteCore desktop experience (for content editors to experience) using simple .ASPX files (no Sheer UI XML required, that stuff just runs the core)
  • Per-item performance metering (i.e. in a "tooltip window", this content item takes 10ms to render)
  • On-the-fly field editing while in preview mode
  • An API fully exposing everything you need to dynamically create or access CMS data
  • A nice "developer network" web site with detailed documentation and guides
  • A security model that even lets you use Active Directory!

.. the list would just go on and on.

The best part about SiteCore, which I just found out about today, is that there is a free, one-user version you can download called SiteCore Xpress and play with right now. From what I can tell, there are no other strings attached, i.e. no logo or ad requirements and no trial period, other than it only allows for one user (administrator), can only run on one server, and isn't licensed for commercial use. [More info] Unfortunately the one other down side (and possible deal-killer, but not likely) is that the Xpress version is v5.3, not v6. But at least it's v5.3 and not v5.

From what I've learned, v5.3 was about the time when SiteCore really reached a turning point and started making big inroads into U.S. sales. (SiteCore is apparently developed in Denmark.) v5.3 being the last product update before the very recently released (Sep '08?) v6, it's still a very good product. I didn't use v5.3, but from what I know, I believe the differences between v5.3 and v6 are:

  • v5.3 is based on .NET 3.0 / ASP.NET 2.0, not .NET 3.5 / ASP.NET 3.5.
  • v5.3 has a proprietary thing called "masters", which were dropped in v6. In v5.3, new items had to be instantiated with templates+masters, but in v6 only templates are required.
  • The security subsystem may have been overhauled or tweaked; at minimum, v6 has a newly revised security configuration UI.
  • v5.3 had huge performance improvements over previous versions. v6 probably has huge performance improvements on top of those, particularly as the core now takes advantage of ASP.NET 3.5 (if indeed it does, I don't know)
  • A ridiculous bug was fixed in v6: in v5.3, cached content would process the layout anyway before returning the cached version (duh)--a bug which, rumor has it, SiteCore isn't owning up to fixing in v5.x.
  • v6 has a slick optional Page Editor which is basically a preview of the actual page (with formatting) but with inline editable regions that actually edit the CMS items on the fly.
  • v6 has a new validation component (to slap your content editors around a bit with a fish)
  • v6 has a new Grid Designer that's a nice addition but really isn't necessary

There's a review on v6 over here: http://www.cmswire.com/cms/web-cms/web-cms-sitecore-6-customer-driven-usability-002840.php

Anyway, this isn't a sales pitch, as I'm just a new user, not a business partner (or at least, not yet! *grin*), but it's something I wanted to share. I love cool developer technology, particularly for .NET, and of all the ASP.NET solutions I've seen on the market so far, open source or commercial, CMS or otherwise, I think SiteCore v6 sets the bar for all web applications to try to measure up to. Literally. And if you don't believe me, go download SiteCore Xpress and see for yourself.


Cool Tools | Web Development

ASP.NET AJAX 4.0: Microsoft did it! They married the DOM and client-driven data!

by Jon Davis 3. November 2008 01:03

For over a year I've been going through a discovery process, with one pet project after another, of how it might be possible to glue Javascript objects and DOM objects together and keep them synchronized with a server in two-way data binding. And, to do it in a way that was elegant, readable, terse, trivial, and blatantly obvious and sensible. I usually failed to see this as my actual objective--I sought to find solutions to more specific problems whereby this would be the ultimate ideal.

A long while back, I came up with a script library I called Sprinkle. I created a domain name for it, I mentioned it to Ajaxian.com, and they featured an article on it. Somehow, Sprinkle disappeared and was forgotten (an oversight on my part) and sprinklejs.com no longer points anywhere.

But what Sprinkle did was quite simple. It allowed you to put src="..." on any DOM element, within reason, particularly <div>'s for HTML injection and <input>'s for default values populated from URL GETs. Then I sought to make it XHTML-compliant by using the DTD extensions spec whereby I was allowed to add my own attributes (like 'src=..') to existing declared elements (like <div>). Problem was, the browsers generated junk characters at the top of the page when I did that, so I had to use script to strip off the junk characters. The whole thing became a mess, for one simple lightweight proof of concept, and I hadn't even begun to tinker with advanced HTML and 2-way binding.

At one point a co-worker and I created a client-side MVC framework where we ended up creating DOM object assignments to "client controls" that would effectively be able to manipulate their associated DOM objects as though they were properties (which indeed they were). Nothing special in itself but it was another example of where we were marrying HTML and script. Ultimately my co-worker did most of the implementation work on that. I helped do a lot of architectural design of it, but he pointed out a lot of issues we had to work through such as dealing with composite ASP.NET server controls that defined our own client control--ClientID issues, for example, in a tree of control containers, one control nested after another.
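The ClientID wrinkle is easy to show. Web Forms prefixes a control's rendered DOM id with the ID of every naming container above it, so client script can't assume the ID it sees in the .aspx markup. Here's a hypothetical helper sketching that composition (the container names are just illustrative, not from any real page):

```javascript
// Hypothetical sketch of how Web Forms composes a rendered ClientID:
// each naming container's ID is prepended, joined with underscores.
function clientId(containerIds, controlId) {
  return containerIds.concat([controlId]).join('_');
}

// A 'Name' textbox nested in a master page's content placeholder
// ends up with a very different id in the DOM than in the markup:
var domId = clientId(['ctl00', 'ContentPlaceHolder1'], 'Name');
// domId === 'ctl00_ContentPlaceHolder1_Name'
```

This is why a client control framework like ours had to carry the server-generated ClientID down to the script rather than hard-coding selectors.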

And in my previous post I brought the whole matter up again and pondered even simple one-way data binding from scratch all over again. In the end I figured (and noted): phooey, just use client-side templates, jQuery, and JSON web services. You still, however, end up with a lot of manual work in some ways.

Anyway, the Web Forms dependency in ASP.NET AJAX made ASP.NET AJAX a complete turn-off to me all this time. I think my previous post pretty well documents why Web Forms is worth hating, most notably the evil <head runat="server"> and <form runat="server"> tags and how they completely mess everything up in a client-oriented web app.

While I reserve room for skepticism, ASP.NET 4.0 makes some promises in favor of lightweight view templates to such an extent that I'm raising my eyebrows and thinking it's worth blogging about. I'm not sure where I was back in July or so when it was announced but the new 4.0 platform promises a whole new approach to building web apps for the client. With the new ASP.NET 4.0 client templates [2], XHTML is cleanly extended using XML namespaces, and databinding and HTML templating is performed using DOM API abstractions (and jQuery) rather than server-side templating logic. What this will mean in the practical sense is yet to be proven but right now I'm thinking, holy cow. The panacea I just spoke of in my previous post has finally arrived, I think.
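To make the idea concrete: this is not the ASP.NET AJAX 4.0 API itself (which attaches {{ expression }} bindings to namespaced XHTML and a DOM abstraction), just a bare-bones sketch of the underlying concept of client templating--merging a plain data object into markup on the client instead of on the server:

```javascript
// Minimal client-side templating sketch: replace {{ key }} placeholders
// in a markup string with values from a plain data object.
function renderTemplate(template, data) {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, function (match, key) {
    // Leave unknown bindings intact rather than emitting 'undefined'
    return key in data ? String(data[key]) : match;
  });
}

var rowTemplate = '<li>{{ name }} ({{ email }})</li>';
var html = renderTemplate(rowTemplate, { name: 'Jon', email: 'jon@example.com' });
// html === '<li>Jon (jon@example.com)</li>'
```

The promise of the real client template engine is that this merge, plus two-way synchronization back to the data object, happens declaratively rather than via string munging like the above.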

But let's not get ahead of ourselves. Fortunately, the proposed new ASP.NET AJAX approaches are already available as preview downloads. We should play with this thing and provide feedback.

More info, and proof that I was a little late to the party, is at: http://weblogs.asp.net/bleroy/archive/2008/07/30/using-client-templates-part-1.aspx

.. and officially here: http://quickstarts.asp.net/previews/ajax/templates/usingajaxtemplate.aspx


Web Development

Eliminating Postbacks: Setting Up jQuery On ASP.NET Web Forms and Managing Data On The Client

by Jon Davis 19. October 2008 12:11

This is a follow-up to a prior post, Keys to Web 3.0 Design and Development When Using ASP.NET. Now I want to focus solely on getting jQuery and client-side data management working with ASP.NET 2.0, without ASP.NET AJAX or ASP.NET MVC.

So you're stuck with Visual Studio 2005 and ASP.NET Web Forms. You want to flex your ninja skills. You can't jump into ASP.NET MVC or ASP.NET AJAX or an alternate templating solution like PHP, though. Are you going to die ([Y]/N)? N

 

Why would you use Web Forms in the first place? Well, you might want to take advantage of some of the data binding shorthand that can be done with Web Forms. For this blog entry, I'll use the example of a pre-populated DropDownList (a <select> tag filled with <option>'s that came from the database). 

This is going to be kind of a "for Dummies" post. Anyone who has good experience with ASP.NET and jQuery is likely already quite familiar with how to get jQuery up and running. But there are a few caveats that an ASP.NET developer would need to remember or else things become tricky (and again, no more tricky than is easily mastered by an expert ASP.NET developer).

Caveat #1: You cannot simply throw a script include into the head of an ASPX page.

The following page markup is invalid:

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
    <script language="javascript" type="text/javascript" src="jQuery-1.2.6.js"></script>
</head>
<body>

It's invalid because ASP.NET truncates tags from the <head runat="server"> section that it doesn't recognize or know how to deal with, such as a plain <script src="..."> include. You have to either put the script into the <body> or register it with the page, which will then cause it to be emitted into the <body>.

Registering the script with the page rather than putting it in the body yourself is recommended by Microsoft because:

  1. It allows you to guarantee the life cycle--more specifically the load order--of your scripts.
  2. It allows ASP.NET to do the same (manage the load order) of your scripts alongside the scripts on which ASP.NET Web Forms is running. Remember that Web Forms hijacks the <form> tag and the onclick behavior of ASP.NET buttons and such things, so it does know some Javascript already and needs to maintain that.
  3. When a sub-page or an .ascx control requires a dependency script, it helps to prevent the same dependency script from being added more than once.
  4. It allows controls to manage their own scripts. More on that in a moment.
  5. It allows you to put the inclusion markup into a server language context where you can use ResolveUrl("~/...") to resolve the file's location dynamically according to the application path. This is very important in web sites where a directory hierarchy--with ASP.NET files buried inside a directory--is in place.

Here's how to add an existing external script (a script include) like jQuery into your page. Go to the code-behind file (or the C# section if you're not using code-behind) and register jQuery like so:

protected void Page_Load(object sender, EventArgs e)
{
    Page.ClientScript.RegisterClientScriptInclude(
        typeof(_YourPageClassName_), "jQuery", ResolveUrl("~/js/jquery-1.2.6.js"));
} 

A hair more verbose than I'd prefer, but it's not awful. In the case of jQuery, which is usually a foundational dependency for many other scripts (and itself has no dependencies), you might also consider putting this in an OnInit() override rather than Page_Load()--particularly if you're adding it to a control, where the lifecycle is less predictable in Page_Load() than in OnInit(). I'll get into that shortly.

There is a way to inject a script into the <head>, such as described here: http://www.aspcode.net/Javascript-include-from-ASPNET-server-control.aspx. However, that is even more verbose, and it's not really considered "the ASP.NET Web Forms way".

If you want to use the Page.ClientScript registration methods for page script (written inline with markup), create a Literal control and put your script tag there. Then on the code-behind you can use Page.ClientScript.RegisterClientScriptBlock().

On the page:

<body>
    <form id="form1" runat="server">
    <asp:Literal runat="server" ID="ScriptLiteral" Visible="false">
    <script language="javascript" type="text/javascript">
        alert($);
    </script>
    </asp:Literal> 

Note that I'm using a hidden (Visible="false") Literal tag, and this tag is inside the <form runat="server"> tag. Which leads me to ..

Caveat #2: ASP.NET controls can only be declared inside <form runat="server">.

Alright, so then on the code-behind file (or server script), I add:

protected void Page_Load(object sender, EventArgs e)
{
    Page.ClientScript.RegisterClientScriptInclude(
        typeof(_YourPageClassName_), "jQuery", ResolveUrl("~/js/jquery-1.2.6.js"));
    Page.ClientScript.RegisterClientScriptBlock(
        typeof(_YourPageClassName_), "ScriptBlock", ScriptLiteral.Text, false);
} 

Unfortunately, ..

Caveat #3: Client script blocks that are registered on the page in server code lack Intellisense designers for script editing.

To my knowledge, there's no way around this, and believe me, I've looked. This is a design error on Microsoft's part; it should not have been hard to create a special tag like <asp:ClientScriptBlock runat="server">YOUR_SCRIPT_HERE();</asp:ClientScriptBlock> that registers the given script during the Page_Load lifecycle and offers a rich, syntax-highlighting, Intellisense-supporting code editor for its contents. They did add a ScriptManager control, but it is unfortunately overkill in some ways, and it's only available in the ASP.NET AJAX extensions, not in core ASP.NET Web Forms.

But since they didn't give us this functionality in ASP.NET Web Forms, if you want natural script editing (and let's face it, we all do), you can just use unregistered <script> tags the old-fashioned way. Put such script blocks either inside the <form runat="server"> element (wrapped in a Literal control and registered, as demonstrated above), or else below the closing </form> tag of the <form runat="server"> element.

Tip: You can usually safely use plain HTML <script language="javascript" type="text/javascript">...</script> tags the old-fashioned way, without registering them, as long as you place them below your <form runat="server"> blocks and you are acutely aware of which dependency scripts are or are not also registered.

But scripts that are used as dependency libraries for your page scripts, such as jQuery, should be registered. Now then. We can simplify this... 

Tip: Use an .ascx control to shift the hassle of script registration to the markup rather than the code-behind file.

A client-side developer shouldn't have to keep jumping to the code-behind file to add client-side code. That just doesn't make a lot of workflow sense. So here's a thought: componentize jQuery as a server-side control so that you can declare it on the page and then call it.

Controls/jQuery.ascx (complete):

<%@ Control Language="C#" AutoEventWireup="true" CodeFile="jQuery.ascx.cs" Inherits="Controls_jQuery" %> 

(Nothing, basically.)

Controls/jQuery.ascx.cs (complete):

using System;
using System.Web.UI; 

public partial class Controls_jQuery : System.Web.UI.UserControl
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e); // don't skip the base control's initialization
        if (Enabled)    // honor the Enabled attribute; otherwise it's never checked
            AddJQuery();
    } 

    private bool _Enabled = true;
    [PersistenceMode(PersistenceMode.Attribute)]
    public bool Enabled
    {
        get { return _Enabled; }
        set { _Enabled = value; }
    } 

    void AddJQuery()
    {
        string minified = Minified ? ".min" : "";
        string url = ResolveClientUrl(JSDirUrl 
            + "jQuery-" + _Version
            + minified 
            + ".js");
        Page.ClientScript.RegisterClientScriptInclude(
            typeof(Controls_jQuery), "jQuery", url);
    } 

    private string _jsDir = null;
    public string JSDirUrl
    {
        get
        {
            if (_jsDir == null)
            {
                if (Application["JSDir"] != null)
                    _jsDir = (string)Application["JSDir"];
                else return "~/js/"; // default
            }
            return _jsDir;
        }
        set { _jsDir = value; }
    } 

    private string _Version = "1.2.6";
    [PersistenceMode(PersistenceMode.Attribute)]
    public string JQueryVersion
    {
        get { return _Version; }
        set { _Version = value; }
    } 

    private bool _Minified = false;
    [PersistenceMode(PersistenceMode.Attribute)]
    public bool Minified
    {
        get { return _Minified; }
        set { _Minified = value; }
    }
}


Now with this control created we can remove the Page_Load() code we talked about earlier, and just declare this control directly.

(Add to top of page, just below <%@ Page .. %>:)

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %>
<%@ Register src="~/Controls/jQuery.ascx" TagPrefix="local" TagName="jQuery" %> 

(And add just below <form runat="server">:)

<form id="form1" runat="server">
<local:jQuery runat="server"
    Enabled="true"
    JQueryVersion="1.2.6"
    Minified="false"
    JSDirUrl="~/js/" /> 

Note that none of the attributes listed above in local:jQuery (except for runat="server") are necessary as they're defaulted.

On a side note, if you were using Visual Studio 2008 you could use the documentation features that enable you to add a reference to another script, using ///<reference path="js/jQuery-1.2.6.js" />, which is documented here:

There's something else I wanted to go over. In a previous discussion, I mentioned that I'd like to see multiple <form>'s on a page, each one being empowered in its own right with Javascript / AJAX functionality. I mentioned to use callbacks, not postbacks. In the absence of ASP.NET AJAX extensions, this makes <form runat="server"> far less relevant to the lifecycle of an AJAX-driven application.

To be clear,

  • Postbacks are the behavior of ASP.NET to perform a form post back to the same page from which the current view derived. It processes the view state information and updates the output with a new page view accordingly.
  • Callbacks are the behavior of an AJAX application to perform a GET or a POST to a callback URL. Ideally this should be an isolated URL that performs an action rather than requests a new view. The client side would then update the view itself, depending on the response from the action. The response can be plain text, HTML, JSON, XML, or anything else.

jQuery already has functionality that helps the web developer to perform AJAX callbacks. Consider, for example, jQuery's serialize() function, which I apparently forgot about this week when I needed it (shame on me). Once I remembered it I realized this weekend that I needed to go back and implement multiple <form>'s on what I've been working on to make this function work, just like I had been telling myself all along.
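For those who've forgotten it like I had, serialize() walks a form's successful fields and produces a URL-encoded query string ready for a GET or POST callback. Here's a rough sketch of the encoding step as a pure function over name/value pairs, so the idea is visible without a DOM (this is my own illustration, not jQuery's implementation):

```javascript
// Rough sketch of what a form serializer produces: each name/value pair
// URL-encoded and joined with '&', ready to be posted to a callback URL.
function serializePairs(pairs) {
  return pairs
    .map(function (p) {
      return encodeURIComponent(p.name) + '=' + encodeURIComponent(p.value);
    })
    .join('&');
}

var query = serializePairs([
  { name: 'Name', value: 'Jon Davis' },
  { name: 'Email', value: 'jon@example.com' }
]);
// query === 'Name=Jon%20Davis&Email=jon%40example.com'
```

With real jQuery you'd just call $('#ActionA').serialize() against one of the client-side forms described below, and hand the result to $.get() or $.post().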

But as we know,

Caveat #4: You can only have one <form runat="server"> tag on a page.

And if you recall Caveat #2 above, that means that ASP.NET controls can only be put in one form on the page, period.

It's okay, though; we're not using ASP.NET controls for postbacks or for view state. We will not even use view state anymore, not in the ASP.NET Web Forms sense of the term. Session state, though? .. Maybe, assuming there is only one web server, or a shared session state service is implemented, or the load balancer is properly configured to map each unique client to the same server on every request. Failing all of these, you likely have a broken site without view state, which means that you shouldn't abandon Web Forms-based programming yet. But no one in their right mind would let all three of those fail, so let's not worry about that.

So I submit this ..

Tip: You can have as many <form>'s on your page as you feel like, as long as they are not nested (you cannot nest <form>'s of any kind).

Caveat #5: You cannot have client-side <form>'s on your page if you are using Master pages, as Master pages impose a <form runat="server"> context for the entirety of the page.

With the power of jQuery to manipulate the DOM, this next tip becomes feasible:

Tip: Treat <form runat="server"> solely as a staging area, by wrapping it in <div style="display:none">..</div> and using jQuery to pull out what you need for each of your client-side <form>'s.

By "a staging area", what I mean by that is that the <form runat="server"> was necessary to include the client script controls for jQuery et al, but it will also be needed if we want to include any server-generated HTML that would be easier to generate there using .ascx controls than on the client or using old-school <% %> blocks.

Let's create an example scenario. Consider the following page:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="MyMultiForm.aspx.cs" Inherits="MyMultiForm" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
    
        <div id="This_Goes_To_Action_A">
            <asp:RadioButton ID="ActionARadio" runat="server" 
                GroupName="Action" Text="Action A" /><br />
            Name: <asp:TextBox runat="server" ID="Name"></asp:TextBox><br />
            Email: <asp:TextBox runat="server" ID="Email"></asp:TextBox>
        </div>
        
        <div id="This_Goes_To_Action_B">
            <asp:RadioButton ID="ActionBRadio" runat="server" 
                GroupName="Action" Text="Action B" /><br />
            Foo: <asp:TextBox runat="server" ID="Foo"></asp:TextBox><br />
            Bar: <asp:TextBox runat="server" ID="Bar"></asp:TextBox>
        </div>
        
        <asp:Button runat="server" Text="Submit" UseSubmitBehavior="true" />
    
    </div>
    </form>
</body>
</html> 

And just to illustrate this simple scenario with a rendered output ..

Now in a postback scenario, this would be handled on the server side by determining which radio button is checked, and then taking the appropriate action (Action A or Action B) on the appropriate fields (Action A's fields or Action B's fields).

If we change this instead to client-side behavior, the whole thing is garbage and should be rewritten from scratch.

Tip: Never use server-side controls except for staging data load or unless you are depending on the ASP.NET Web Forms life cycle in some other way.

In fact, if you are 100% certain that you will never stage data on data-bound server controls, you can eliminate the <form runat="server"> altogether and go back to using <script> tags for client scripts. If you do that, however, you'll have to keep your scripts in the <body>. For that matter, you might as well rename your file with a .html extension rather than a .aspx extension, but of course at that point you're not using ASP.NET anymore, so don't. ;)

I'm going to leave <form runat="server"> in place because .ascx controls, even without postbacks and view state, are just too handy and I'll illustrate this with a drop-down list later.

I can easily replace the above scenario with two old-fashioned HTML forms:

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="MyMultiForm.aspx.cs" Inherits="MyMultiForm" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <%--Nothing in the form runat="server"--%>
    </form>
    <div>
        
        <div id="This_Goes_To_Action_A">
            <input type="radio" name="cbAction" checked="checked" 
                id="cbActionA" value="A" />
            <label for="cbActionA">Action A</label><br />
            <form id="ActionA" name="ActionA" action="ActionA">
                Name: <input type="text" name="Name" /><br />
                Email: <input type="text" name="Email" />
            </form>
        </div>
        <div id="This_Goes_To_Action_B">
            <input type="radio" name="cbAction" checked="checked" 
                id="cbActionB" value="B" />
            <label for="cbActionB">Action B</label><br />
            <form id="ActionB" name="ActionB" action="ActionB">
                Foo: <input type="text" name="Foo" /><br />
                Bar: <input type="text" name="Bar" />
            </form>
        </div>
        
        <button onclick="if (document.getElementById('cbActionA').checked)
                            alert('ActionA would submit.'); //document.ActionA.submit();
                         else if (document.getElementById('cbActionB').checked)
                            alert('ActionB would submit.'); //document.ActionB.submit();">Submit</button>
    
    </div>
</body>
</html> 

In some ways this got a lot cleaner, but in other ways it got a lot more complicated. First of all, I had to move the radio buttons outside of the forms, as radio buttons only work within a single form context. For that matter, there's a design problem here: it's better to put a submit button on each form than to use Javascript to determine which form to post based on a radio button. That way one doesn't have to manually check which radio button is checked; in fact, one could drop the radio buttons altogether, and could have from the beginning, even with ASP.NET postbacks, since both scenarios can facilitate two submit buttons, one for each form. But I put the radio buttons in to illustrate one small example of where things inevitably get complicated on a complex page with multiple forms and multiple callback behaviors.

In an AJAX-driven site, you should never (or rarely) use <input type="submit"> buttons, even if you have an onsubmit handler on your form. Instead, use plain <button>s with onclick handlers, and control submission behavior with asynchronous XmlHttpRequest calls. If you must leave the page for another, either use user-clickable hyperlinks (ideally to a RESTful HTML view URL) or use window.location: window.location.reload() refreshes the page, and window.location.href=".." redirects it. Refresh is useful if you really do want to stay on the same page but refresh your data. With no form postback, refreshing the page again or clicking the Back and Forward buttons will not result in a browser dialog asking whether you want to resubmit the form data, which is NEVER an appropriate dialog in an AJAX-driven site.

Another issue is that we are not taking advantage of jQuery at all and are using document.getElementById() instead.

Before we continue:

Tip: If at this point in your career path you feel more confident in ASP.NET Web Forms than in "advanced" HTML and DOM scripting, drop what you're doing and go become a master and guru of that area of web technology now.

ASP.NET Web Forms is harder to learn than HTML and DOM scripting, but I've found that ASP.NET and advanced HTML DOM can be, and often are, learned in isolation; many competent ASP.NET developers know very little about "advanced" HTML and DOM scripting outside of the ASP.NET Web Forms methodology. But if you're trying to learn how to switch from postback-based coding to callback-based coding, we literally cannot continue until you have mastered HTML and DOM scripting.

Since this is also about jQuery, you also need at least a strong working knowledge of jQuery before we continue.

The key problem with the above code, though, assuming the commented-out bits in the button's onclick event handler were used, is that the forms are still configured to post to the server with a full-page redirect, not AJAXy callback-style. What do we do?

First, bring back jQuery. We'll use the control we made earlier. (If you're using master pages, put this on the master page and forget about it so it's always there.)

..
<%@ Register src="~/Controls/jQuery.ascx" TagPrefix="local" TagName="jQuery" %>
..
<form id="form1" runat="server">
    <local:jQuery runat="server" />
</form> 

Next, to clean up, replace all document.getElementById(..) calls with $("#..")[0]. This is jQuery's easier-to-read-and-write way of getting a DOM element by ID. I know it looks odd at first, but once you know jQuery and are used to it, $("#..")[0] is a very natural-looking syntax.

<button onclick="if ($('#cbActionA')[0].checked)
                    alert('ActionA would submit.'); //$('#ActionA')[0].submit();
                 else if ($('#cbActionB')[0].checked)
                    alert('ActionB would submit.'); //$('#ActionB')[0].submit();">Submit</button> 

Now we need to take a look at that submit() code and replace it.

One of the main reasons we broke off <form runat="server"> and created two isolated forms is so that we can invoke jQuery's serialize() function to create a string variable containing essentially the same serialization that would have been POSTed to the server had the form's default behavior executed; serialize() requires a dedicated form to process the conversion. The string resulting from serialize() is essentially the same as what normally appears in an HTTP request body for a POST method.

Note: jQuery documentation mentions, "In order to work properly, serialize() requires that form fields have a name attribute. Having only an id will not work." But you must also give your <form> an id attribute if you intend to use $("#formid").
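To make that format concrete, here is a plain-JavaScript sketch of the application/x-www-form-urlencoded string that this serialization produces. The serializeFields helper and its field data are illustrative, not part of jQuery; jQuery's own serialize() reads the fields from the live form's name attributes (and substitutes + for %20):

```javascript
// Illustrative sketch of the application/x-www-form-urlencoded format
// that form serialization produces; serializeFields is a hypothetical
// helper, not jQuery API.
function serializeFields(fields) {
    var pairs = [];
    for (var name in fields) {
        // Each pair is URL-encoded and joined with '&', just like a POST body
        pairs.push(encodeURIComponent(name) + "=" + encodeURIComponent(fields[name]));
    }
    return pairs.join("&");
}

var body = serializeFields({ Name: "Jon Davis", Email: "jon@example.com" });
// body is now "Name=Jon%20Davis&Email=jon%40example.com"
```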

So now instead of invoking the appropriate form's submit() method, we should invoke a custom function that takes the form, serializes it, and POSTs it to the server, asynchronously. That was our objective in the first place, right?

So we'll add the custom function.

    <script language="javascript" type="text/javascript">
        function postFormAsync(form, fn, returnType) {
            var formFields = $(form).serialize();
            
            // set up a default POST completion routine
            if (!fn) fn = function(response) {
                alert(response);
            }; 

            $.post(
                $(form).attr("action"), // action attribute (url)
                formFields,             // data
                fn,                     // callback
                returnType              // optional
                );
        }
    </script> 

Note the fn argument, which is optional (it defaults to alerting the response) and which I'll not use at this time. It's the callback function: basically, what to do once the POST completes. In a real-world scenario, you'd probably want to pass a function that redirects the user with window.location.href or otherwise updates the contents of the page using DOM scripting. Note also the returnType; refer to jQuery's documentation for that, as it's pretty straightforward.
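For illustration, here is that optional-callback pattern in isolation as a runnable sketch; postAsync and fakePoster are hypothetical stand-ins for postFormAsync and $.post, not jQuery API:

```javascript
// Sketch of the optional-callback pattern used by postFormAsync.
// 'poster' stands in for $.post so the sketch is self-contained.
function postAsync(url, data, fn, poster) {
    // Default completion routine when the caller passes no callback
    if (!fn) fn = function (response) { return "alerted: " + response; };
    // Invoke the transport and hand its response to the callback
    return poster(url, data, fn);
}

// A fake transport that "responds" immediately
function fakePoster(url, data, fn) { return fn("OK from " + url); }

var withDefault = postAsync("ActionA.aspx", "Name=x", null, fakePoster);
var withCustom = postAsync("ActionA.aspx", "Name=x",
    function (r) { return r.toLowerCase(); }, fakePoster);
```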

And finally we'll change the button code to invoke it accordingly.

<button onclick="if ($('#cbActionA')[0].checked)
                    postFormAsync($('#ActionA')[0]);
                 else if ($('#cbActionB')[0].checked)
                    postFormAsync($('#ActionB')[0]);">Submit</button> 

This works, but it assumes that you have a callback URL handler waiting for you at the action="" attribute of the form. For my own tests of this sample, I had to change the action attributes on my <form> tags to "ActionA.aspx" and "ActionB.aspx", these being new .aspx files in which I simply had "Action A!!" and "Action B!!" as the markup. While my .aspx files would also need to check for the form fields, the script otherwise worked fine and proved the point.

Alright, at this point some folks might still be squirming with irritation and confusion about the <form runat="server">. So now that we have jQuery performing AJAX callbacks for us, I still have yet to prove out any utility of having a <form runat="server"> in the first place, and what "staging" means in the context of the tip I stated earlier. Well, the automated insertion of jQuery and our page script at appropriate points within the page is in fact one example of "staging" that I'm referring to. But another kind of staging is data binding for initial viewing.

Let's consider the scenario where both of two forms on a single page have a long list of data-driven values.

Page:

...
<asp:DropDownList runat="server" ID="DataList1" />
<asp:DropDownList runat="server" ID="DataList2" />
... 

Code-behind / server script:

protected void Page_Load(object sender, EventArgs e)
{
    DataList1.DataSource = GetSomeData();
    DataList1.DataBind();

    DataList2.DataSource = GetSomeOtherData();
    DataList2.DataBind();
} 

Now let's assume that DataList1 will be used by the form Action A, and DataList2 will be used by the form Action B. Each will be "used by" their respective forms only in the sense that their <option> tags will be populated by the server at runtime.

Since you can only put these ASP.NET controls in a <form runat="server">, and you can only have one <form runat="server"> on the page, you therefore cannot simply put an <asp:DropDownList ... /> control directly into each of your forms. You'll have to come up with another way.

One-way data binding technique #1: Move the element, or contain the element and move the element's container.

You could just move the element straight over from the <form runat="server"> form to your preferred form as soon as the page loads. To do this (cleanly), you'll have to create a container <div> or <span> tag with a predictable ID and wrap the ASP.NET control in it.

Basic example:

$("#myFormPlaceholder").append($("#myControlContainer")); 

Detailed example:

...
<div style="display: none" id="ServerForm">
    <%-- Server form is only used for staging, as shown--%>
    <form id="form1" runat="server">
        <local:jQuery runat="server" />
        <span id="DataList1_Container">
            <asp:DropDownList runat="server" ID="DataList1">
            </asp:DropDownList>
        </span>
        <span id="DataList2_Container">
            <asp:DropDownList runat="server" ID="DataList2">
            </asp:DropDownList>
        </span>
    </form>
</div>
...
<script language="javascript" type="text/javascript">
...
$().ready(function() {
    $("#DataList1_PlaceHolder").append($("#DataList1_Container"));
    $("#DataList2_PlaceHolder").append($("#DataList2_Container"));
});
</script>
<div>
    <div id="This_Goes_To_Action_A">
        ...
        <form id="ActionA" name="ActionA" action="callback/ActionA.aspx">
        ...
        DropDown1: <span id="DataList1_PlaceHolder"></span>
        </form>
    </div>
    <div id="This_Goes_To_Action_B">
        ...
        <form id="ActionB" name="ActionB" action="callback/ActionB.aspx">
        ...
        DropDown2: <span id="DataList2_PlaceHolder"></span>
        </form>
    </div>
</div> 

An alternative to referencing an ASP.NET control in its DOM context by using a container element is to register its ClientID property to script as a variable and move the server control directly. If you're using simple client <script> tags without registering them, you can use <%= control.ClientID %> syntax.

Page: 

<script language="javascript" type="text/javascript">
...
$().ready(function() {
    var DataList1 = $("#<%= DataList1.ClientID %>")[0];
    var DataList2 = $("#<%= DataList2.ClientID %>")[0];
    $("#DataList1_PlaceHolder").append($(DataList1));
    $("#DataList2_PlaceHolder").append($(DataList2));
});
</script> 

If you are using a literal and Page.ClientScript.RegisterClientScriptBlock, you won't be able to use <%= control.ClientID%> syntax, but you can instead use a pseudo-tag syntax like "{control.ClientID}", and then when calling RegisterClientScriptBlock perform a Replace() against that pseudo-tag.

Page: 

<asp:Literal runat="server" Visible="false" ID="ScriptLiteral">
<script language="javascript" type="text/javascript">
    ...
    $().ready(function() {
        var DataList1 = $("#{DataList1.ClientID}")[0];
        var DataList2 = $("#{DataList2.ClientID}")[0];
        $("#DataList1_PlaceHolder").append($(DataList1));
        $("#DataList2_PlaceHolder").append($(DataList2));
    });
</script>
</asp:Literal> 

Code-behind / server script:

protected void Page_Load(object sender, EventArgs e)
{
    ...
    Page.ClientScript.RegisterClientScriptBlock(
        typeof(MyMultiForm), "pageScript", 
        ScriptLiteral.Text
        .Replace("{DataList1.ClientID}", DataList1.ClientID)
         .Replace("{DataList2.ClientID}", DataList2.ClientID));
} 

For the sake of brevity (and as a tentative decision for my own usage), for the rest of this discussion I will use the second of the three approaches: old-fashioned <script> tags with <%= control.ClientID %> syntax to identify server control DOM elements, moving each element directly rather than containing it.

One-way data binding technique #2: Clone the element and/or copy its contents.

You can copy the contents of the server control's data output to the place on the page where you're actively using the data. This can be useful if, for example, each of two forms has a field that uses the same data.

Page:

<script language="javascript" type="text/javascript">
function copyOptions(src, dest) {
    for (var o = 0; o < src.options.length; o++) {
        var opt = document.createElement("option");
        opt.value = src.options[o].value;
        opt.text = src.options[o].text;
        try {
            dest.add(opt, null); // standards compliant; doesn't work in IE
        }
        catch (ex) {
            dest.add(opt); // IE only
        }
    }
} 

$().ready(function() {
    var DataList1 = $("#<%= DataList1.ClientID %>")[0];
    copyOptions(DataList1, $("#ActionA_List")[0]);
    copyOptions(DataList1, $("#ActionB_List")[0]); // both use same DataList1
});
</script> 

... 

<form id="ActionA" ...>
    ...
    DropDown1: <select id="ActionA_List"></select>
</form>
<form id="ActionB" ...>
    ...
    DropDown1: <select id="ActionB_List"></select>
</form>


This introduces a sort of dynamic data binding technique whereby the form of the data being output by the server controls starts to matter less. What if, for example, the server form stopped outputting HTML and instead began outputting JSON? The revised syntax would not be much different from the above, but the source data would come not from DOM elements but from data structures. That would be much more manageable from the perspective of separation of concerns and testability.

But before I get into that, what if things got even more tightly coupled instead? 

One-way data binding technique #3: Mangle the markup directly.

As others have noted, inline server markup was pooh-poohed when ASP.NET came out and introduced the code-behind model. But when migrating away from Web Forms, going back to the old-fashioned inline server tags and logic is like a breath of fresh air; it allows much to be done with little effort.

Here you can see how quickly and easily one can populate a drop-down list using no code-behind conventions and using the templating engine that ASP.NET already inherently offers.

List<string>:

<select>
    <% MyList.ForEach(delegate (string s) {
            %><option><%=HttpUtility.HtmlEncode(s)%></option><%
        }); %>
</select>  

Dictionary<string, string>:

<select>
    <%  foreach (
           System.Collections.Generic.KeyValuePair<string, string> item
           in MyDictionary)
        {
            %><option value="<%= HttpUtility.HtmlEncode(item.Value) 
            %>"><%=HttpUtility.HtmlEncode(item.Key) %></option><%
        } %>
</select> 

For simple conversions of lists and dictionaries to HTML, this looks quite lightweight. Even mocking this up I am impressed. Unfortunately, in the real world data binding often tends to get more complex.

One-way data binding technique #4: Bind to raw text, JSON/Javascript, or embedded XML.

In technique #2 above (clone the element and/or copy its contents), data was bound from other HTML elements. To get the original HTML elements, the HTML had to be generated by the server. Technically, data-binding to HTML is a form of serialization. But one could also serialize the data model as data and then use script to build the destination HTML and/or DOM elements from the script data rather than from original HTML/DOM.

You could output data as raw text, such as name/value pair collections formatted like a query string. Working with text requires manual parsing. That can be fine for really simple comma-delimited lists (see Javascript's String.split()), but as soon as you introduce slightly more complex data structures, such as trees, you end up needing to look at alternatives.
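A minimal sketch of that manual parsing, assuming a query-string-formatted response body (parsePairs is a hypothetical helper, not a library function):

```javascript
// Manual parsing of a name/value response body using String.split(),
// the sort of hand-rolled work raw-text data binding requires.
function parsePairs(text) {
    var result = {};
    var pairs = text.split("&");
    for (var i = 0; i < pairs.length; i++) {
        // Each pair is "name=value", URL-encoded
        var kv = pairs[i].split("=");
        result[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1]);
    }
    return result;
}

var data = parsePairs("Name=Jon%20Davis&Email=jon%40example.com");
// data.Name is "Jon Davis"; data.Email is "jon@example.com"
```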

The traditional data structure to work with anything on the Internet is XML. For good reason, too; XML is extremely versatile as a data description language. Unfortunately, XML in a browser client is extremely difficult to code for because each browser has its own implementation of XML reading/manipulating APIs, much more so than the HTML and CSS compliance differences between browsers.

If you use JSON you're working with Javascript literals. If you have a JSON library installed (I like the JsonFx Serializer because it works with ASP.NET 2.0 / C# 2.0) you can take any object that would normally be serialized and JSON-serialize it as a string on the fly. Once this string is injected to the page's Javascript, you can access the data as live Javascript objects rather than as parsed XML trees or split string arrays.

Working directly with data structures rather than generated HTML is much more flexible when you're working with a solution that is already Javascript-oriented rather than HTML-oriented. If most of the view logic is driven by Javascript, indeed it is often very nice for the script runtime to be as data-aware as possible, which is why I prefer JSON because the data structures are in Javascript's "native tongue", no translation necessary.
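As a sketch of what that looks like, assume the server emits a JSON array (the payload shape and the buildOptions helper here are illustrative): the client materializes it as live objects and generates the <option> markup from data rather than by copying DOM elements. JSON.parse is used for brevity; period code would use a JSON library or eval().

```javascript
// Hypothetical JSON payload as a server might emit it (e.g. via JsonFx)
var jsonFromServer = '[{"value":"1","text":"First"},{"value":"2","text":"Second"}]';
var items = JSON.parse(jsonFromServer); // now live Javascript objects

// Build <option> markup from the data structures, not from source DOM
function buildOptions(list) {
    var html = "";
    for (var i = 0; i < list.length; i++) {
        html += '<option value="' + list[i].value + '">' + list[i].text + '</option>';
    }
    return html;
}

var optionsHtml = buildOptions(items);
```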

Once you've crossed that line, though, of moving your data away from HTML generation and into script, a whole new door is opened: the client can receive pre-generated HTML as rendering templates only, and then make an isolated effort to take the data and use those templates to present it to the user. Doing this, as opposed to doing the same on the server, inevitably makes the client experience much more fluid. But at this point you can start delving into real AJAX...

One-way data binding technique #5: Scrap server controls altogether and use AJAX callbacks only.

Consider the scenario of a page that starts out as a blank canvas. It has a number of rendering templates already loaded, but there is absolutely no data on the initial page that is sent back. As soon as the page is loaded, however (think jQuery's $(document).ready(function() { ... });), you could then have the page load the data it needs to function. This data could derive from a web service URL that is isolated from the page entirely--the same app, that is, but a different relative URL.

In an ASP.NET 2.0 implementation, this can be handled easily with jQuery, .ashx files, and something like the JsonFx JSON Serializer.

From an AJAX purist perspective, AJAX-driven data binding is by far the cleanest approach to client orientation. While it does result in the most "chatty" HTTP interaction, it can also result in the most fluid user experience and the most manageable web development paradigm, because now you've literally isolated the data tier in its entirety.

Working with data in script and synchronizing using AJAX and nothing but straight Javascript standards, the door flies wide open to easily convert one-way data binding to two-way data binding. Posting back to the server is a snap; all you need to do is update your script data objects with the HTML DOM selections and then push that data object back out over the wire, in the exact same way the data was retrieved but in reverse.
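A rough sketch of that round trip, with the DOM reads simulated by a plain object (all names here are illustrative, not a real API):

```javascript
// The same data object the page was originally bound from
var model = { Name: "Jon", Email: "jon@example.com" };

// Step 1: update the model from the user's (here simulated) DOM field values
function updateModel(m, domValues) {
    for (var key in domValues) m[key] = domValues[key];
    return m;
}
updateModel(model, { Email: "newaddress@example.com" });

// Step 2: serialize the model for the POST back over the wire --
// the exact reverse of how the data was retrieved
var wireBody = JSON.stringify(model);
```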

In most ways, client-side UI logic plus AJAX is the panacea for versatile web UIs. The problem is that there is little consistent guidance in the industry, especially for the .NET crowd. There are a lot of client-oriented architectures, but few of them are suited for the ASP.NET environment, and the ones that are (or that are neutral) either lack server-side orientation or are not properly endorsed by the major players. This should not be the case, but it is. As a result, compromises like ASP.NET AJAX, RoR+prototype+scriptaculous, GWT, Laszlo, and other combined AJAX client/server frameworks all look like feasible considerations, but personally I think they all stink of excess, learning curve, and code and/or runtime bloat in solution implementations.


Web Development

Quick Tip: Use ASP.NET Templating To Generate E-mail Bodies Or Other Templated Things

by Jon Davis 10. October 2008 22:38

I've been in at least three jobs now where a significant amount of effort by others on the team was spent performing e-mail blasts to thousands or tens of thousands of [opted-in] users, whether for newsletters or for e-mail advertising campaigns. And I've also had countless encounters with requirements for a web site to send an e-mail to a single user, such as upon registration or for user-to-user communications via web-based PM interfaces.

It has been surprising to me to discover how insistent development teams are on using alternative templating techniques to generate the pre-formatted e-mails, everything from:

  • .txt files with variable markers like {{USERNAME}}
  • NVelocity templates
  • Database records with text BLOBs and variable markers like %%USERNAME%%
  • C#-written code with lots and lots of myStringBuilder.Append()'s.

(I do like the database text BLOBs if only for simple plain-text formatted e-mails, as this sort of thing allows non-developer system administrators to customize their templates on the fly.)

Yet, never do people take advantage of the templating system they're already using. It's called ASP.NET.

Why would you use ASP.NET? Actually, in a web context, "why wouldn't you?" might be the fairer question, because the reasons for using it are obvious:

  • ASP.NET is one of the most powerful and versatile templating engines in use today. Why anyone would use NVelocity in a world where there's ASP.NET is beyond me.
  • You can take advantage of the full .NET Framework, web services, HttpContext features, and more.
  • You can data-bind to the database and output the same user-customized Rich HTML as you could with normal web pages.
  • You can add repeaters and other multi-dimensional variables that are not possible with flat templates.

Cons:

  • Requires the use of an insanely powerful service that is likely already running to support such things. (That would be IIS.)

Say, for instance, you're invoking it from a web page that has just processed some database transaction, and you need to e-mail the user to inform him of a status or reminder or something. You can use Server.Execute() to invoke the ASP.NET template, which writes its rendered output into a TextWriter. Pass in a StreamWriter wrapping a MemoryStream, then flush it and read the stream back with a StreamReader into a System.String. Easy! What about context and state? All the more reason to use Server.Execute and not System.Net.WebClient: you retain the current HttpContext, though in any case you can also build up a lengthy query string in the URL to be invoked and then perform whatever context synchronization needs to happen on the template side.

Here's a sample that demonstrates how (but not why).

Default.aspx:

<%@ Page Language="C#" AutoEventWireup="true"  CodeFile="Default.aspx.cs" Inherits="_Default" %> 

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> 

<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title></title>
</head>
<body>
    <form id="form1" runat="server">
    <div>
        <%=TemplateOutput %>
    </div>
    </form>
</body>
</html> 

Default.aspx.cs:

protected string TemplateOutput
{
    get
    {
        // (Requires "using System.IO;" at the top of the code-behind file.)
        MemoryStream ms = new MemoryStream();
        StreamWriter sw = new StreamWriter(ms);

        // Pass data to the template via the shared HttpContext
        Context.Items["Question"] = "Is this a working template?";

        // Render Template.aspx into the stream (true = preserve form state)
        Server.Execute("Template.aspx", sw, true);
        sw.Flush();
        ms.Position = 0;
        StreamReader sr = new StreamReader(ms);
        return sr.ReadToEnd();
    }
}

Template.aspx:

<%@ Page Language="C#" %>

<%= Context.Items["Question"].ToString()
    .Replace("Is this", "This is").Replace("?", "!") %> 

Output:

This is a working template!

I hope this was helpful.


Web Development | ASP.NET

Keys To Web 3.0 Design and Development When Using ASP.NET

by Jon Davis 9. October 2008 05:45

You can skip the following boring story as it's only a prelude to the meat of this post.

As I've been sitting at my job lately trying to pull off my web development ninja skillz, I feel like my hands are tied behind my back because I'm there temporarily as a consultant to add features, not to refactor. The current task at hand involves adding a couple of additional properties to a key user component in a rich web application. This requires a couple of extra database columns and a bit of HTML interaction to collect the new settings. All in all, about 15 minutes, right? Slap the columns into the database, update the SQL SELECT query, throw on a couple of ASP.NET controls, add some data binding, and you're done, right? Surely not more than an hour, right?

Try three hours, just to add the columns to the database! The HTML is driven by a data "business object" that isn't a business object at all, just a data layer that has method stubs for invoking stored procedures and returns only DataTables. There are four types of "objects" based on the table being modified, and each type has its own stored procedure that ultimately proxies out to the base type's stored procedure, so that means at least five stored procedures for each CRUD operation affected by the addition. Overall, about 10 database objects were touched and as many C# data layer objects as well. Add to that a proprietary XML file that is used to map these data objects' DataTable columns, both in (parameters) and out (fields).

That's just the data. Then on the ASP.NET side, to manage event properties there's a control that inherits another control that is contained by another control that is contained by two other controls before it finally shows up on the page. Changes to the properties are a mix of hard-wired bindings to the lowest base control's properties for some of the user's settings, while for most of the rest of the user's settings on the same page, CLR events (event args) are raised by the controls and captured by the page that contains it all. There are at least five different events, one for each "section" of properties. To top it off, to my shame, I added yet another "SaveXXX" event, plus yet another way of passing the data: a series of FindControl(..) invocation chains to get to the buried control and fetch the setting I wanted to add to the database and/or translate back out to the view. (I would have done better than to add more kludge, but I couldn't without being enticed to refactor, which I couldn't do; it's a temporary contract and the boss insisted that I not.)

To top it all off, even the simple CRUD stored procedures alone take far longer than an eye blink and are seemingly showstopping when invoked from code. It takes about five seconds to handle each postback on this page, and I'm running locally (with a networked SQL Server instance).

The guys who architected all this are long gone. This wasn't the first time I've been baffled by the output of an architect who tries too hard to do the architectural deed while forgetting that his job is not only to be declarative on all layers but also to balance it with performance and making the developers' lives less complicated. In order for the team to be agile, the code must be easily adaptable.

Plus the machine I was given is, just like everyone else's, a cheap Dell with 2GB RAM and a 17" LCD monitor. (At my last job, which I quit, I had a 30-inch monitor and 4GB RAM which I replaced without permission and on my own whim with 8GB.) I frequently get OutOfMemoryExceptions from Visual Studio when trying to simply compile the code.

There are a number of reasons I can pinpoint to describe exactly why this web application has been so horrible to work with. Among them,

  • The architecture violates the KISS principle. The extremities of the data layer prove to be confounding, and burying controls inside controls (compositing) and then forking instances of them is a severe abuse of ASP.NET "flexibility".
  • OOP principles were completely ignored. Not a single data layer inherits from another. There is no business object among the "Business" objects' namespace, only data invocation stubs that wrap stored procedure execution with a transactional context, and DataTables for output. No POCO objects to represent any of the data or to reuse inherited code.
  • Tables, not stored procedures, should be used in basic CRUD operations. Stored procedures are best reserved for complex operations where multiple two-way queries must be accomplished to get a job done: good for operations, bad for basic data I/O and model management.
  • Way too much emphasis on relying on the Web Forms featureset and lifecycle (event raising, viewstate hacking, control compositing, etc.) to accomplish functionality, and way too little understanding and utilization of the basic birds and butterflies (HTML and script).
  • Way too little attention to developer productivity by failure to move the development database to the local switch, have adequate RAM, and provide adequate screen real estate to manage hundreds of database objects and hundreds of thousands of lines of code.
  • The development manager's admission of the sadly ignorant and costly attitude that "managers don't care about cleaning things up and refactoring, they just want to get things done and be done with it"--I say "ignorant and costly" because my billable hours were more than quadrupled versus what they would have been with clean, editable code to begin with.
  • New features are not testable in isolation -- in fact, they aren't even compilable in isolation. I can compile and do lightweight testing of the data layer without more than a few heartbeats, but it takes two minutes to compile the web site just to see where my syntax or other compiler-detected errors are in my code additions (and I haven't been sleeping well lately so I'm hitting the Rebuild button and monitoring the Errors window an awful lot). 

Even as I study (ever so slowly) for MCPD certification for my own reasons while I'm at home (spare me the biased anti-Microsoft flames on that, I don't care) I'm finding that Microsoft end developers (Morts) and Microsofties (Redmondites) alike are struggling with the bulk of their own technology and are heaping up upon themselves the knowledge of their own infrastructure before fully appreciating the beauty and the simplicity of the pure basics. Fortunately, Microsoft has had enough, and they've been long and hard at the drawing board to reinvent ASP.NET with ASP.NET MVC. But my interests are not entirely, or not necessarily, MVC-related.

All I really want is for this big fat pillow to be taken off of my face, and all these multiple layers of coats and sweatshirts and mittens and ski pants and snow boots to be taken off me, so I can stomp around wearing just enough of what I need to be decent. I need to breathe, I need to move around, and I need to be able to do some ninja kung fu.

These experiences I've had with ASP.NET solutions often make me sit around brainstorming how I'd build the same solutions differently. It's always easy to be everyone's skeptic, and it requires humility to acknowledge that just because you didn't write something or it isn't in your style or flavor doesn't mean it's bad or poorly produced. Sometimes, however, it is. And most solutions built with Web Forms, actually, are.

My frustration isn't just with Web Forms. It's with corporations that build upon Internet Explorer rather than HTML+Javascript. It's with most ASP.NET web applications adopting a look-and-feel that seems to grow in a box controlled by Redmondites, with few artistic deviators rocking the boat. It's with server-driven view management rather than smart clients in script and markup. It's with nearly all development frameworks that cater to the ASP.NET crowd being built for IIS (the server) and not for the browser (the client).

I intend to do my part, although intentions are easy and actions can be hard. I've helped design an elaborate client-side MVC framework before, with great pride; I'm thinking about doing it again, implementing it myself this time (I didn't have the luxury of a real-world implementation [i.e. a site] last time, I only helped design it and wrote some of the core code), and open sourcing it for the ASP.NET crowd. I'm also thinking about building a certain kind of ASP.NET solution I've frequently needed to work with (CRM? CMS? Social? something else? *grin* I won't say just yet), one that takes advantage of certain principles.

What principles? I need to establish these before I even begin. These have already worked their way into my head and my attitude and are already an influence in every choice I make in web architecture, and I think they're worth sharing.

1. Think dynamic HTML, not dynamically generated HTML. Think of HTML like food; do you want your fajitas sizzling when they arrive, enjoyed fresh on your plate with a fork and knife, or do you prefer your food preprocessed and shoved into your mouth like a dripping wet ball of finger-food sludge? As much as I love C#, and acknowledge the values of Java, PHP, Ruby on Rails, et al, the proven king and queen of the web right now, for most of the web's past, and for the indefinite future are the HTML DOM and Javascript. This has never been truer than now with jQuery, MooTools, and other (I'd rather not list them all) significant scripting libraries that have flooded the web development industry with client-side empowerment. Now, with Microsoft adopting jQuery as a core asset for ASP.NET's future, there's no longer any excuse. Learn to develop the view for the client, not for the server.

Why? Because despite the fact that client-side debugging tools are less evolved than on the server (no edit-and-continue in VS, for example, and FireBug is itself buggy), the overhead of managing presentation logic in a (server) context that doesn't relate to the user's runtime is just too much to deal with sometimes. Server code often takes time to recompile, whereas scripts don't typically require compilation at all. While in theory there is plenty of control on the server to debug what's needed while you have control of it in your own predictable environment, in practice there are just too many stop-edit-retry cycles going on in server-oriented view management.

And here's why that is. The big reason to move the view to the client is that developers are just writing WAY too much view, business, and data mangling logic in the same scope and context. Client-driven view management nearly forces the developer to isolate view logic from data. In ASP.NET Web Forms, your 3 tiers are the database, data+view mangling on the server, and finally whatever poor and unlucky little animal (browser) has to suffer with the resulting HTML. ASP.NET MVC changes that to essentially five tiers: the database, the models, the controller, the server-side view template, and finally whatever poor and unlucky little animal has to suffer with the resulting HTML. (Okay, Microsoft might be changing that with adopting jQuery and promising a client solution; we'll see.)

Most importantly, client-driven views make for a much richer, more interactive UIX (User Interface/eXperience); you can, for example, reveal/hide or enable/disable a set of sub-questions depending on whether the user checks a checkbox, with instant gratification. The ASP.NET Web Forms model would have it automatically perform a form post to refresh the page with the area enabled/disabled/revealed/hidden depending on the checked state. The difference is profound--a millisecond or two versus an entire second or two.
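
A minimal sketch of that instant-gratification version, in plain Javascript (the element IDs here are hypothetical, invented for illustration):

```javascript
// Pure helper: map a checkbox's checked state to a CSS display value.
function subQuestionDisplay(checked) {
  return checked ? "block" : "none";
}

// Browser wiring: toggle the sub-question block with no round trip at all.
if (typeof document !== "undefined") {
  var checkbox = document.getElementById("hasAllergies");     // hypothetical ID
  var subQuestions = document.getElementById("allergyDetails"); // hypothetical ID
  checkbox.onclick = function () {
    subQuestions.style.display = subQuestionDisplay(checkbox.checked);
  };
}
```

No postback, no page refresh; the sub-questions appear the instant the box is checked.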

2. Abandon ASP.NET Web Forms. RoR implements a good model; try gleaning from that. ASP.NET MVC might be the way of the future. But frankly, most of the insanely popular web solutions on the Internet are PHP-driven these days, and I'm betting that's because PHP follows a coding model similar to classic ASP's. No MVC stubs. No code-behinds. All that stuff can be tailored into a site as a matter of discipline (one of the reasons why PHP added OOP), but you're not forced into a one-size-fits-all paradigm; you just write your HTML templates and go.

Why? Web Forms is a bear. Its only two advantages are the ability to drag-and-drop functionality onto a page and watch it go, and premier vendor (Microsoft / Visual Studio / MSDN) support. But it's difficult to optimize, difficult to templatize, difficult to abstract away from business logic layers (if only in that it requires intentional discipline), and puts way too much emphasis on the lifecycle of the page hit and postback. Look around at the ASP.NET Web Forms solutions out there. Web Forms is crusty like Visual Basic is crusty. It was created for, and is mostly used by, corporate grunts who build B2B (business-to-business) or internal apps. The rest of the web sites that use ASP.NET Web Forms suffer greatly from the painful code bloat of the Web Forms coding model and the horrible end-user costs of page bloat and round-trip navigation.

Kudos to Guthrie, et al, who developed Web Forms, it is a neat technology, but it is absolutely NOT a one-size-fits-all platform any more than my winter coat from Minnesota is. So congratulations to Microsoft for picking up the ball and working on ASP.NET MVC.

3. Use callbacks, not postbacks. Sometimes a single little control, like a textbox that behaves like an auto-suggest combobox, just needs a dedicated URL to perform an AJAX query against. But also, in ASP.NET space, I envision the return of multiple <form>'s, with DHTML-based page MVC controllers powering them all, driving them through AJAX/XmlHttpRequest.

Why? Clients can be smart now. They should do the view processing, not the server. The browser standard has finally arrived at such a place that most people have browsers capable of true DOM/DHTML and Javascript with JSON and XmlHttpRequest support.

Clearing and redrawing the screen is as bad as 1980s BBS ANSI screen redraws. It's obsolete. We don't need to write apps that way. Postbacks are cheap; don't be cheap. Be agile; use patterns, practices, and techniques that save development time and energy while avoiding the loss of a fluid user experience. <form action="someplace" /> should *always* have an onsubmit handler that returns false but runs an AJAX-driven post. The page should *optionally* redirect, but more likely only the area of the form or a region of the page (a containing DIV perhaps) should be replaced with the results of the post. Retain your header and sidebar in the user experience, and don't even let the content area go white for a split second. Buffer the HTML and display it when ready.
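
Here is roughly what that onsubmit discipline looks like in plain Javascript. The form and region IDs are hypothetical, and the serialization helper is an invented convenience:

```javascript
// Pure helper: turn a { name: value } map into an x-www-form-urlencoded body.
function serializeFields(fields) {
  var pairs = [];
  for (var name in fields) {
    if (fields.hasOwnProperty(name)) {
      pairs.push(encodeURIComponent(name) + "=" + encodeURIComponent(fields[name]));
    }
  }
  return pairs.join("&");
}

// Browser wiring: post the form via XMLHttpRequest and swap only the
// containing region's HTML -- the header and sidebar never flicker.
if (typeof document !== "undefined") {
  var form = document.getElementById("commentForm"); // hypothetical IDs
  form.onsubmit = function () {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", form.action, true);
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.onreadystatechange = function () {
      if (xhr.readyState === 4 && xhr.status === 200) {
        document.getElementById("commentRegion").innerHTML = xhr.responseText;
      }
    };
    xhr.send(serializeFields({ body: form.elements.body.value }));
    return false; // suppress the full-page postback
  };
}
```

The `return false` is the whole trick: the browser's default form navigation never fires, so the page never goes white.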

ASP.NET AJAX has region refreshes already, but still supports only <form runat="server" /> (limit 1), and the code-behind model of ASP.NET AJAX remains the same. Without discipline of changing from postback to callback behavior, it is difficult to isolate page posts from componentized view behavior. Further, <form runat="server" /> should be considered deprecated and obsolete. Theoretically, if you *must* have ViewState information you can drive it all with Javascript and client-side controllers assigned to each form.

ASP.NET MVC can manage callbacks uniformly by defining a REST URL suffix, prefix, or querystring, and then assigning a JSON handler view to that URL, for example ~/employee/profile/jsmith?view=json might return the Javascript object that represents employee Joe Smith's profile. You can then use Javascript to pump HTML generated at the client into view based on the results of the AJAX request.
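
A sketch of the client half of that arrangement, assuming the hypothetical JSON view URL above and made-up field names; it uses the then-emerging native JSON.parse (a json2.js shim fills the gap in older browsers):

```javascript
// Pure helper: render an employee profile object -- shaped the way the JSON
// view might return it -- into an HTML fragment. Field names are hypothetical.
function renderProfile(employee) {
  return "<h2>" + employee.name + "</h2>" +
         "<p>" + employee.title + " -- " + employee.department + "</p>";
}

// Browser wiring: fetch the JSON view and pump client-generated HTML into place.
if (typeof XMLHttpRequest !== "undefined" && typeof document !== "undefined") {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/employee/profile/jsmith?view=json", true); // hypothetical URL
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var employee = JSON.parse(xhr.responseText);
      document.getElementById("profile").innerHTML = renderProfile(employee);
    }
  };
  xhr.send(null);
}
```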

4. By default, allow users to log in without visiting a login page. A slight tangent (or so it would seem), this is a UI design constraint, something that has been a pet peeve of mine ever since I realized that it's totally unnecessary to have a login page. If you don't want to put ugly Username/Password fields on the header or sidebar, use AJAX.

Why? Because if a user visits your site and sees something interesting and clicks on a link, but membership is required, the entire user experience is interrupted by the disruption of a login screen. Instead, fade out to 60%, show a DHTML pop-up login, then fade back in and continue forward. The user never leaves the page before seeing the link or functionality being accessed.
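
One way to sketch that fade-and-overlay flow; all the IDs and the inline styling here are invented for illustration:

```javascript
// Pure helper: build the markup for an in-page login overlay, so the user
// never navigates away from the page they were on.
function loginOverlayHtml() {
  return '<div id="dimmer" style="opacity:0.6;"></div>' +
         '<div id="loginBox">' +
         '<input type="text" name="username" />' +
         '<input type="password" name="password" />' +
         '<input type="submit" value="Log in" />' +
         '</div>';
}

// Browser wiring: intercept clicks on a members-only link and pop the
// overlay instead of bouncing the user off to a login page.
if (typeof document !== "undefined") {
  var link = document.getElementById("membersOnlyLink"); // hypothetical ID
  link.onclick = function () {
    var overlay = document.createElement("div");
    overlay.innerHTML = loginOverlayHtml();
    document.body.appendChild(overlay);
    return false; // the interesting content stays right there behind the dimmer
  };
}
```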

Imagine if Microsoft Windows' UAC, OS X's keyring, or GNOME's sudo auth did a total clear-screen and ignored your action whenever it needed an Administrator password. Thankfully it doesn't work that way; the flow is paused with a small dialogue box, not flat out interrupted.

5. Abandon the Internet Explorer "standard". This goes out to the corporate folks who target IE. I am not saying this as an anti-IE bigot. In fact, I'm saying this in Internet Explorer's favor. Internet Explorer 8 (currently not yet released, still in beta) introduces better web standards support than previous versions of Internet Explorer, and it's not nearly as far behind the trail of Firefox and WebKit (Safari, Chrome) as Internet Explorer 7 is. With this reality, web developers can finally and safely build W3C-compliant web applications without worrying too much about which browser vendor the user is using, and instead ask the user to get the latest version.

Why? Supporting multiple different browsers typically means writing more than one version of a view. This means developer productivity is lost. That means that features get stripped out due to time constraints. That means that your web site is crappier. That means users will be upset because they're not getting as much of what they want. That means fewer users will come. And that means less money. So take on the "Write once, run anywhere" mantra (which was Java's slogan back in the mid-90s) by writing W3C-compliant code, and leave behind only those users who refuse to update their favorite browsers, and you'll get a lot more done while reaching a broader market, if not now then very soon--perhaps half a year after IE 8 is released. Use Javascript libraries like jQuery to handle most of the browser differences that are left over, while at the same time being empowered to add a lot of UI functionality without postbacks. (Did I mention that postbacks are evil?)

6. When hiring, favor HTML+CSS+Javascript gurus who have talent and an eye for good UIX (User Interface/eXperience) over ASP.NET+database gurus. Yeah! I just said that!

Why? Because the web runs on the web! Surprisingly, most employers have this all upside down. They revere database gurus as gods and look down upon UIX developers as children. But the fact is, I've seen more ASP.NET+SQL guys who only halfway know that stuff and know little of HTML+Javascript than I have seen AJAX pros, and honestly pretty much every AJAX pro is bright enough and smart enough to get down and dirty with the BLL and SQL when the time comes. Personally, I can see why HTML+CSS+Javascript roles are paid less (sometimes a lot less) than server-oriented developers--any script kiddie can learn HTML!--but when it comes to professional web development they are ignored WAY too much for only that reason. The web's top sites require extremely brilliant front-end expertise, including Facebook, Hotmail, Gmail, Flickr, YouTube, MSNBC--even Amazon.com, which most prominently features server-generated content yet also reveals a significant amount of client-side expertise.

I've blogged it before and I'll mention it again: the one, first, and most recent time I ever had to personally fire a co-worker (my boss being out of town, my having the authority, and my boss requesting it of me over the phone) was when I was working with an "imported" contractor who had a master's degree and full Microsoft certification, but could not copy two simple hyperlinks with revised URLs in under 5-10 minutes while I watched. The whole office was in a gossiping frenzy--"What? Couldn't create a hyperlink? Who doesn't know HTML?! How could anyone not know HTML?!"--but I realized that we as technologists have taken the core fundamentals for granted to such an extent that we've forgotten how important it is to value them in our hiring processes.

7.  ADO.NET direct SQL code or ORM. Pick one. Don't just use data layers. Learn OOP fundamentals. The ActiveRecord pattern is nice. Alternatively, if it's a really lightweight web solution, just go back to writing plain-Jane SQL with ADO.NET. If you're using C# 3.0, which of course you are in the context of this blog entry, then use LINQ-to-SQL or LINQ-to-Entities. On the ORM side, however, I'm losing favor with some of them because they often cater to a particular crowd. I'm slow to say "enterprise" because, frankly, too many people assume the word "enterprise" for their solutions when they are anything but. Even web sites running at tens of thousands of hits a day and generating hundreds of thousands of dollars of revenue every month aren't necessarily "enterprise". The term "enterprise" is more of a people management inference than a stability or quality effort. It's about getting many people on your team using the same patterns and not having loose and abrupt access to thrash the database. For that matter, the corporate slacks-and-tie crowd of ASP.NET "Morts" often can relate to "enterprise" and not even realize it. But for a very small team (ten or fewer) and especially for a micro ISV (five or fewer developers) with a casual and agile attitude, take the word "enterprise" with a grain of salt. You don't need a gajillion layers of red tape. For that matter, though, smaller teams are usually small because of tighter budgets, and that usually means tighter deadlines, and that means developer productivity must reign right there alongside stability and performance. So find an ORM solution that emphasizes productivity (minimal maintenance and easily adaptable), and don't you dare trade routine refactoring for task-oriented focus, as you'll end up just wasting everyone's time in the long run. Always include refactoring to simplicity in your maintenance schedule.

Why? Why go raw with ADO.NET direct SQL or choose an ORM? Because some people take the data layer WAY too far. Focus on what matters; take the effort to avoid the effort of fussing with the data tier. Data management is less important than most teams seem to think. The developer's focus should be on the UIX (User Interface/eXperience) and the application functionality, not how to store the data. There are three areas where the typical emphasis on data management is agreeably important: stability, performance (both of which are why we choose SQL Server over, oh, I dunno, XML files?) and queryability. The latter is important both for the application and for decision makers. But a fourth requirement is routinely overlooked, and that is the emphasis on being able to establish a lightweight developer workflow of working with data so that you can create features quickly and adapt existing code easily. Again, this is why a proper understanding of OOP, how to apply it, when to use it, etc., is emphasized all the time, by yours truly. Learn the value of abstraction and inheritance and of encapsulating interfaces (resulting in polymorphism). Your business objects should not be much more than POCO objects with application-realized properties. Adding a new simple data-persisted object, or modifying an existing one with, say, a new column, should not take more than a minute of one's time. Spend the rest of that time instead on how best to impress the user with a snappy, responsive user interface.

8. Callback-driven content should derive equally easily from your server, your partner's site, or some strange web service all the way in la-la land. We're aspiring for Web 3.0 now, but what happened to Web 2.0? We're building on top of it! Web 2.0 brought us mashups, single sign-ons, and cross-site social networking. Facebook Applications are a classic demonstration of an excelling student of Web 2.0 now graduating and turning into a Web 3.0 student. The problem is, to keep the momentum going, who's driving this rig? If it's not you, you're missing out on the 3.0 vision.

Why? Because now you can. Hopefully by now you've already shifted the bulk of the view logic over to the client. And you've empowered your developers to focus on the front-end UIX. Now, though, the client view is empowered to do more. It still has to derive content from you, but in a callback-driven architecture, the content is URL-defined. As long as security implications are resolved, you now have the entire web at your [visitors'] disposal! Now turn it around to yourself and make your site benefit from it!

If you're already invoking web services, get that stuff off your servers! Web services queried from the server cost bandwidth and add significant time overhead before the page is released from the buffer to the client. The whole time you're fetching the results of a web service you're querying, the client is sitting there looking at a busy animation or a blank screen. Don't let that happen! Throw the client a bone and let it fetch the external resources on its own.
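
In that era of same-origin-locked XmlHttpRequest, the classic way to let the browser pull data straight from another domain is JSONP: inject a script tag whose response calls back into your page. A sketch, with a wholly hypothetical partner endpoint and parameter names:

```javascript
// Pure helper: build a JSONP URL. The partner's server is expected to wrap
// its JSON reply in a call to the named callback function.
function jsonpUrl(base, params, callbackName) {
  var query = [];
  for (var key in params) {
    if (params.hasOwnProperty(key)) {
      query.push(encodeURIComponent(key) + "=" + encodeURIComponent(params[key]));
    }
  }
  query.push("callback=" + encodeURIComponent(callbackName));
  return base + "?" + query.join("&");
}

// Browser wiring: inject a <script> tag; the partner replies with
// showWeather({...}), executed the moment it arrives -- your own IIS box
// never touches the request.
if (typeof document !== "undefined") {
  window.showWeather = function (data) { /* render data into the page */ };
  var script = document.createElement("script");
  script.src = jsonpUrl("http://partner.example.com/weather", // hypothetical endpoint
                        { zip: "85001" }, "showWeather");
  document.body.appendChild(script);
}
```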

9. Pay attention to the UIX design styles of the non-ASP.NET Web 2.0/3.0 communities. There is such a thing as a "Web 2.0 look", whether we like to admit it or not; we web developers evolved and came up with innovations worth standardizing on, why can't designers evolve and come up with visual innovations worth standardizing on? If the end user's happiness is our goal, how are features and stable and performant code more important than aesthetics and ease of use? The problem is, one perspective of what "the Web 2.0 look" actually looks like is likely very different from another's or my own. I'm not speaking of heavy gloss or diagonal lines. I most certainly am not talking about the "bubble gum" look. (I jokingly mutter "Let's redesign that with diagonal lines and beveled corners!" now and then, but when I said that to my previous boss and co-worker, both of whom already looked down on me WAY more than they deserved to do, neither of them understood that I was joking. Or, at least, they didn't laugh or even smile.) No, but I am talking about the use of artistic elements, font choices and font styles, and layout characteristics that make a web site stand out from the crowd as being highly usable and engaging. 

Let's demonstrate, shall we? Here are some sites and solutions that deserve some praise. None of them are ASP.NET-oriented.

  • http://www.javascriptmvc.com/ (ugly colors but otherwise nice layout and "flow"; all functionality driven by Javascript; be sure to click on the "tabs")
  • http://www.deskaway.com/ (ignore the ugly logo but otherwise take in the beauty of the design and workflow; elegant font choice)
  • http://www.mosso.com/ (I really admire the visual layout of this JavaServer Pages-driven site; happily, they support ASP.NET in their product)
  • http://www.feedburner.com/ (these guys did a redesign not too terribly long ago; I really admire their selective use of background patterns, large-font textboxes, hover effects, and overall aesthetic flow)
  • http://www.phpbb.com/ (stunning layout, rock solid functionality, universal acceptance)
  • http://www.joomla.org/ (a beautiful and powerful open source CMS)
  • http://goplan.org/ (I don't like the color scheme but I do like the sheer simplicity)
  • http://www.curdbee.com/ (for that matter, I also love the design and simplicity here)

Now here are some ASP.NET-oriented sites. They are some of the most popular ASP.NET-driven sites and solutions, but their design characteristics, frankly, feel like the late 90s.

  • http://www.dotnetnuke.com/ (one of the most popular CMS/portal options in the open source ASP.NET community .. and, frankly, I hate it)
  • http://www.officelive.com/ (sign in and discover a lot of features with a "smart client" feel, but somehow it looks and feels slow, kludgy, and unrefined; I think it's because Microsoft doesn't get out much)
  • http://communityserver.com/ (it looks like a step in the right direction, but there's an awful lot of smoke and mirrors; follow the Community link and you'll see the best of what the ASP.NET community has to offer in the way of forums .. which frankly doesn't impress me as much as phpBB)
  • http://www.dotnetblogengine.net/ (my blog uses this, I like it well enough, but it's just one niche, and that's straight-and-simple blogs)
  • http://subsonicproject.com/ (the ORM technology is very nice, but the site design is only "not bad", and the web site starter kit leaves me shrugging with a shiver)

Let's face it, the ASP.NET community is not driven by designers.

Why? Why do I ramble on about such fluffy things? Because at my current job (see the intro text) the site design is a dump of one feature hastily slapped on after another, and although the web app has a lot of features and plenty of AJAX to empower it here and there, it is, for the most part, an ugly and disgusting piece of cow dung in the area of UIX (User Interface/eXperience). AJAX functionality is based on third party components that "magically just work" while gobs and gobs of gobbledygook code on the back end attempts to wire everything together, and what AJAX is there is both rare and slow, encumbered by page bloat and server bloat. The front-end appearance is amateurish, and I'm disheartened as a web developer to work with it.

Such seems to be the makeup of way too many ASP.NET solutions that I've seen.

10. Componentize the client. Use "controls" on the client in the same way you might use .ASCX controls on the server, and in the process of doing this, implement a lifecycle and communications subsystem on the client. This is what I want to do, and again I'm thinking of coming up with a framework to pursue it, to complement Microsoft's and others' efforts. If someone else (i.e. Microsoft) beats me to it, fine. I just hope theirs is better than mine.

Why? Well if you're going to emphasize the client, you need to be able to have a manageable development workflow.

ASP.NET thrives on the workflows of quick-tagging (<asp:XxxXxx runat="server" />) and drag-and-drop, and that's all part of the equation of what makes it so popular. But that's not all ASP.NET is good for. ASP.NET has two great strengths: IIS and the CLR (namely, the C# language). The quality of integration of C# with IIS is incredible. You get near-native-compiled-quality code with the deployment ease of scripted text files, and the deployment is native to the server (no proxying, a la Apache->Tomcat->Java, or even FastCGI->PHP). So why not utilize these strengths to seed a Javascript-based view rather than to generate the entirety of the view?
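
The lifecycle-and-communications idea above can be sketched as a tiny page controller with publish/subscribe between controls, so no control ever references another directly. Every name here is invented for illustration:

```javascript
// A minimal "client controls" controller: registers controls, runs a simple
// init lifecycle, and relays events between controls.
var PageController = {
  controls: [],
  subscribers: {},
  register: function (control) {
    this.controls.push(control);
  },
  init: function () {
    // Lifecycle: give each control a chance to wire itself up.
    for (var i = 0; i < this.controls.length; i++) {
      if (this.controls[i].init) this.controls[i].init(this);
    }
  },
  subscribe: function (eventName, handler) {
    (this.subscribers[eventName] = this.subscribers[eventName] || []).push(handler);
  },
  raise: function (eventName, data) {
    var handlers = this.subscribers[eventName] || [];
    for (var i = 0; i < handlers.length; i++) handlers[i](data);
  }
};

// Example: a media-playback control announces an event a video control consumes
// (think Flash video player responding to playback controls).
var log = [];
PageController.register({
  init: function (controller) {
    controller.subscribe("media:play", function (data) { log.push("playing " + data.id); });
  }
});
PageController.init();
PageController.raise("media:play", { id: "intro" });
// log is now ["playing intro"]
```

The controller is the only shared dependency; controls can be added, removed, or swapped without touching each other's code.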

On the competitive front, take a look at http://www.wavemaker.com/. Talk about drag-and-drop coding for smart client-side applications, driven by a rich server back-end (Java). This is some serious competition indeed.

11. RESTful URIs, not postback or Javascript inline resets of entire pages. Too many developers of AJAX-driven smart client web apps are bragging about how the user never leaves the page. This is actually not ideal.

Why? Every time the primary section of content changes, in my opinion, it should have a URI, and that should be reflected (somehow) in the browser's Address field. Even if it's going to be impossible to make the URL SEO-friendly (because there are no predictable hyperlinks that are spiderable), the user should be able to return to the same view later, without stepping through a number of steps of logging in and clicking around. This is partly the very definition of the World Wide Web: All around the world, content is reflected with a URL.
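
A common way to get there without full page navigations is to encode the view state in the URL's fragment identifier. A sketch, with hypothetical view names:

```javascript
// Pure helpers: map a client-side view to a hash fragment and back, so the
// browser's Address field always reflects the current content.
function stateToHash(view, id) {
  return "#/" + encodeURIComponent(view) + (id ? "/" + encodeURIComponent(id) : "");
}

function hashToState(hash) {
  var parts = hash.replace(/^#\//, "").split("/");
  return {
    view: decodeURIComponent(parts[0] || ""),
    id: parts[1] ? decodeURIComponent(parts[1]) : null
  };
}

// Browser wiring: update the hash whenever the main content changes, and
// rehydrate the view from the hash on page load so a pasted URL works.
if (typeof window !== "undefined" && typeof document !== "undefined") {
  window.location.hash = stateToHash("employee", "jsmith"); // hypothetical view
  var state = hashToState(window.location.hash);
  // ...ask the page controller to load state.view / state.id here
}
```

Changing the fragment never triggers a page load, yet the URL stays bookmarkable and shareable.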

12. Glean from the others. Learn CakePHP. Build a simple Symfony or CodeIgniter site. Watch the Ruby on Rails screencasts and consider diving in. And have you seen Jaxer lately?!

And absolutely, without hesitation, learn jQuery, which Microsoft will be supporting from here on out in Visual Studio and ASP.NET. Discover the plug-ins and try to figure out how you can leverage them in an ASP.NET environment.

Why? Because you've lived in a box for too long. You need to get out and smell the fresh air. Look at the people as they pass you by. You are a free human being. Dare yourself to think outside the box. Innovate. Did you know that most innovations come from gleaning other people's imaginative ideas and implementations and reapplying them in your own world, using your own tools? Why should Ruby on Rails have a coding workflow that's better than ASP.NET's? Why should PHP be a significantly more popular platform on the public web than ASP.NET; what makes it so special besides being completely free of Redmondite ties? Can you interoperate with it? Have you tried? How can the innovations of Jaxer be applied to the IIS 7 and ASP.NET scenario; what can you do to see something as earth-shattering inside this Mortian realm? How can you leverage jQuery to make your web site do things you wouldn't have dreamed of trying to do otherwise? Or at least, how can you apply it to make your web application more responsive and interactive than the typical junk you've been pumping out?

You can be a much more productive developer. The whole world is at your fingertips, you only need to pay attention to it and learn how to leverage it to your advantage.

 

And these things, I believe, are what is going to drive the Web 1.0 Morts in the direction of Web 3.0, building on the hard work of yesteryear's progress and making the most of the most powerful, flexible, stable, and comprehensive server and web development technology currently in existence--ASP.NET and Visual Studio--by breaking out of their molds and entering into the new frontier.



Does 1 Millisecond Matter?

by Jon Davis 6. July 2008 20:31

I'm casually skimming an ASP.NET book for review purposes and I came across mention of the connection factory classes in ADO.NET 2.0.

I forgot about these; I've always seen abstract, app-specific DAL base classes that get implemented with a SQL, Access, or other database-based implementation, but I've never seen anyone use DbProviderFactories.

The book claims that these factory classes provide database neutrality in instantiating a database connection, so that you can use SqlConnection but also OdbcConnection, et al, without changing or recompiling any of the codebase, "without affecting the application's performance!"

No performance hit? Is it not using reflection? I fired up Reflector to introspect these classes, namely System.Data.Common.DbProviderFactories, System.Data.Common.DbConnection, System.Data.Common.DbCommand, and System.Data.Common.DbDataReader. Reflection is indeed used. That's fine--reflection is there for a reason--but when used in any loop it is also notoriously slow (at least 10x the invocation time of a strongly referenced invocation). I suppose if the application has a very lightweight load, it might not matter.

I wrote and ran a performance comparison test in a console app. First I just ran two near-identical methods separately, each in a loop (1000x), one method using DbProviderFactories and one just using SqlConnection, and both using SELECT to return all rows in a single-row, 4-column table. Then I realized it would be good to measure the performance of the last run of each, because the first few runs, and especially the very first run, will be expectedly slower due to runtime caching and JITing.
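
That measure-the-last-run methodology can be sketched generically (here in Javascript, since its runtimes warm up the same way; this is an illustrative harness, not the ADO.NET test itself):

```javascript
// Run fn many times, but report the final iteration separately: the early
// runs pay for JIT compilation and caching, so only the last run reflects
// steady-state cost.
function benchmark(fn, iterations) {
  var totalStart = Date.now();
  var lastDuration = 0;
  for (var i = 0; i < iterations; i++) {
    var start = Date.now();
    fn();
    lastDuration = Date.now() - start;
  }
  return { totalMs: Date.now() - totalStart, lastRunMs: lastDuration };
}

// Example: time a trivial string-building operation 1000x.
var result = benchmark(function () {
  var s = "";
  for (var i = 0; i < 1000; i++) s += "x";
}, 1000);
// result.lastRunMs approximates the steady-state cost; result.totalMs includes warm-up.
```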

Here's the end result:

Factory:        23739 ticks / 2ms (total @ 1000x: 2331ms)
SqlClient:      11233 ticks / 1ms (total @ 1000x: 1321ms)

Now the question becomes, does 1 millisecond difference per connection instance matter, considering how high that number's gonna go when it goes over the wire and both data load and business logic is going to increase things to anywhere from 10ms to 1000ms?

Perhaps not. There is a difference, but it is subtle. The debate is kind of like the debate about "" versus String.Empty.



MVC On The Client In Javascript

by Jon Davis 1. April 2008 04:30

I stumbled across this over the weekend.

http://javascriptmvc.com/

I was actually very surprised by how closely it resembles what we've been working on at the office. Ours uses a controller to manage and control events and event propagation, track "view objects" (we call 'em "client controls" for drag-and-drop support in Visual Web Developer), and manage AJAX calls. And we've spec'd out to use RESTful URIs to manage data model retrieval and callbacks, and these are cacheable using Google Gears, Flash storage, or *shrug* cookies.

Theirs has a few additional features, though, some of which I think we can glean from, like:

  • script librarian ("Include"), which we don't need but I think we could accomplish using something like JSLoader
  • a complete ActiveRecord-like modeling pattern
  • a complete ASP-like templating system that executes on the client
  • "everything is a plug-in" philosophy

I like what I see, although our own framework goes further as it is built with ASP.NET, ASP.NET MVC, Visual Studio, and Expression Web all in mind. With ours, we enable our web designer, who is not an engineer, to create complete, non-Flash RIA web pages without coding. Using Expression Web or Visual Web Developer, he can click on one of our controls in the Toolbox, drag it out to the page, absolutely position it, stylize it, give it a data source URI, and have it subscribe to other controls' events (think of a Flash video player responding to the events of media playback controls). The entire multi-page web site will support executing in the rich execution environment of a single-page RIA application with a seamless user experience. And since the framework is not done in Flash (although Flash "client controls" are supported), it will support continuous extensions using the wonderfully universal languages of HTML and Javascript, both at design-time (creating new controls, customizing existing controls) and at runtime (RESTful fetches of web content, dynamic execution of JSON models, etc.).

In some ways, ours is looking like http://www.wavemaker.com/, except that WaveMaker is based on Java and dojo, and the designer experience is in-page (which is way too much support overhead--why reinvent the designer when Visual Studio / Expression Web can do the job on its own?).

But I'd certainly recommend Javascript MVC (JavascriptMVC.com) as a skeleton foundation framework for someone to roll their own framework. We were thinking about open-sourcing our client bits once we are done with our prototype, but I think Javascript MVC comes close enough that it would do just as well to recommend that one instead. Mind you, I have never used it, I'm only suggesting it based on what I'm seeing at their web site.


Bypassing Cross-Site Scripting Using A Proxy

by Jon Davis 13. December 2007 09:46

When I implemented Sprinkle, which is a client-side includes (CSI) system I came up with that doesn't use IFRAMEs, I kept running into the scenario where you may want to fetch HTML from an external web site besides your own. This is sort of what Web 2.0 is all about, being able to mashup the world with not just your crap but everyone else's crap as well.

I threw together a trivial solution. This is ASP.NET-only; I might come up with a PHP-based equivalent later. The idea is to implement a really trivial proxy server and cache the data for a period of time. In this particular implementation, I cache it directly into the web Application's in-memory collection.

Here's what using it might look like ..

        <%-- Client-side includes with server-side cross-site proxying --%>
        <script type="text/javascript" src="http://sprinklejs.com/sprinkle.js"></script>
        <div src="proxy.aspx?url=http://www.sprinklejs.com/info.html" />

        <%-- Server-side includes with cross-site proxying --%>
        <ssi:ProxyControl runat="server" ID="GoogleInsertion"
            SourceUrl="http://www.google.com/"
            DetectImposeBase="true"
            BaseUrl="proxy.aspx?url=http://www.google.com/" />

In the server-side include implementation, the DetectImposeBase and BaseUrl properties are really just hacks where I force-inject the proxy URL to any src and href element attributes.

If you try to use the above-referenced proxy.aspx file from an external web site, it should fail: the Referer header must point to the same host that serves the proxy.

If you try to reference a very large binary file or something, it will fail. Maximum file size is enforced, so as to not overload the Application in-memory collection that hosts the proxy cache.
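To sketch the idea, a hypothetical code-behind for proxy.aspx might look something like the following. The real implementation is in the download below; the class name, cache-entry shape, size cap, and expiry window here are all assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Web.UI;

public partial class Proxy : Page
{
    // Assumed limits; the actual values are in the downloadable source.
    const int MaxBytes = 256 * 1024;
    static readonly TimeSpan CacheDuration = TimeSpan.FromMinutes(5);

    protected void Page_Load(object sender, EventArgs e)
    {
        // Same-host check: only pages served by this site may use the proxy.
        if (Request.UrlReferrer == null ||
            !Request.UrlReferrer.Host.Equals(
                Request.Url.Host, StringComparison.OrdinalIgnoreCase))
        {
            Response.StatusCode = 403;
            return;
        }

        string url = Request.QueryString["url"];
        if (string.IsNullOrEmpty(url)) { Response.StatusCode = 400; return; }

        // Serve from the Application in-memory cache if still fresh.
        string cacheKey = "proxy:" + url;
        object cached = Application[cacheKey];
        if (cached is KeyValuePair<DateTime, string>)
        {
            var entry = (KeyValuePair<DateTime, string>)cached;
            if (DateTime.UtcNow - entry.Key < CacheDuration)
            {
                Response.Write(entry.Value);
                return;
            }
        }

        // Fetch, enforce the size cap, cache, and relay.
        using (WebClient client = new WebClient())
        {
            string content = client.DownloadString(url);
            if (content.Length > MaxBytes) { Response.StatusCode = 413; return; }
            Application[cacheKey] =
                new KeyValuePair<DateTime, string>(DateTime.UtcNow, content);
            Response.Write(content);
        }
    }
}
```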

This implementation doesn't work flawlessly, and it's sort of a prototype thing; it only took about an hour to hack together (plus some time I spent struggling with Visual Studio puking on me). But anyway, here it is.

Download: http://sprinklejs.com/SSI_Proxy_ASPNET.7z


ASP.NET: Rotated / Pivoted GridView

by Jon Davis 6. December 2007 11:17

I needed to rotate the content of my GridView control. I was databinding using business objects. Why is there no boolean pivot switch?

So I made my own control that inherits DataGrid. This is just a C# file; to use it, put it in the App_Code folder and add <%@ Register Namespace="CustomControls" TagPrefix="Custom" %> to the ASPX file.

Not sure if it's buggy or not but it currently passes initial tests.
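Assuming the class below is sitting in App_Code, page usage might look like this (a sketch; the control ID is made up):

        <%@ Register Namespace="CustomControls" TagPrefix="Custom" %>
        <Custom:PivotDataGrid runat="server" ID="PivotedGrid"
            PivotMode="PostRender" AutoGenerateColumns="true" />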

using System;
using System.Collections.Generic;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Xml;

namespace CustomControls {
    public partial class PivotDataGrid : System.Web.UI.WebControls.DataGrid
    {
        public PivotDataGrid() : base() { }

        public enum Pivot
        {
            /// <summary>
            /// Rotates the grid by rotating the data source in a replacement matrix. (not implemented)
            /// </summary>
            PreRender,
            /// <summary>
            /// Rotates the grid by converting the output to XML and rotating the cells.
            /// </summary>
            PostRender
        }

        private Pivot _PivotMode = Pivot.PostRender;
        public Pivot PivotMode
        {
            get { return _PivotMode; }
            set { _PivotMode = value; }
        }
        protected override void Render(HtmlTextWriter writer)
        {
            switch (PivotMode)
            {
                case Pivot.PostRender:
                    System.Text.StringBuilder sb = new System.Text.StringBuilder();
                    System.IO.StringWriter sw = new System.IO.StringWriter(sb);
                    HtmlTextWriter htw = new HtmlTextWriter(sw);
                    base.Render(htw);
                    System.Xml.XmlDocument xDoc = new System.Xml.XmlDocument();
                    xDoc.LoadXml(sb.ToString().Replace("&nbsp;", "&#160;"));
                    System.Xml.XmlDocument xDoc2 = new System.Xml.XmlDocument();
                    xDoc2.LoadXml(xDoc.OuterXml);
                    XmlNodeList rowNodes = xDoc.SelectNodes("//tr");
                    XmlNode trParent = null;
                    if (rowNodes.Count > 0) trParent = rowNodes[0].ParentNode;
                    rowNodes = xDoc2.SelectNodes("//tr");
                    XmlNode trParent2 = null;
                    if (rowNodes.Count > 0) trParent2 = rowNodes[0].ParentNode;
                    for (int i = rowNodes.Count - 1; i >= 0; i--)
                    {
                        rowNodes[i].ParentNode.RemoveChild(rowNodes[i]);
                    }

                    RotateRows(trParent, trParent2, xDoc2);
                    writer.Write(xDoc2.OuterXml);

                    break;
                default:
                    base.Render(writer);
                    return;
            }
        }

        protected override void DataBind(bool raiseOnDataBinding)
        {
            switch (PivotMode)
            {
                case Pivot.PreRender:
                    RotateDataSource();
                    base.DataBind(raiseOnDataBinding);
                    break;
                case Pivot.PostRender:
                    base.DataBind(raiseOnDataBinding);
                    break;
            }
        }

        void RotateDataSource()
        {
            //object src = base.DataSource;
            //System.Reflection.PropertyInfo[] srcProps = src.GetType().GetProperties(System.Reflection.BindingFlags.GetProperty);
            //Dictionary<string, Dictionary<string, object>> newSource = new Dictionary<string, object>();
            //DataKeyArray keys = this.DataKeys;
            throw new NotImplementedException("PivotMode of Pivot.PreRender has not yet been implemented.");
        }

        //Cell[,] RowsToCells
        void RotateRows(XmlNode sourceParentNode, XmlNode targetParentNode, // bool rotate
            XmlDocument targetContext)
        {
            bool rotate = true;
            XmlNodeList trNodes = sourceParentNode.SelectNodes("tr");
            int rowLen = trNodes.Count;
            int colLen = 0;
            if (rowLen > 0) colLen = trNodes[0].SelectNodes("td").Count;
            Cell[,] cells;
            List<XmlAttribute>[] rowAttribs = new List<XmlAttribute>[rowLen];
            if (!rotate) cells = new Cell[rowLen, colLen];
            else cells = new Cell[colLen, rowLen];
            for (int r = 0; r < rowLen; r++)
            {
                rowAttribs[r] = new List<XmlAttribute>();
                for (int a = 0; a < trNodes[r].Attributes.Count; a++)
                {
                    XmlAttribute attrib = targetContext.CreateAttribute(trNodes[r].Attributes[a].Name);
                    attrib.Value = trNodes[r].Attributes[a].Value;
                    rowAttribs[r].Add(attrib);
                }

                for (int c = 0; c < colLen; c++)
                {
                    XmlNode tdNode = trNodes[r].SelectNodes("td")[c];
                    Cell cell = new Cell();
                    //cell.Attributes = tdNode[c].Attributes;
                    cell.Attributes = new List<XmlAttribute>();
                    for (int a = 0; a < tdNode.Attributes.Count; a++)
                    {
                        XmlAttribute attrib = targetContext.CreateAttribute(tdNode.Attributes[a].Name);
                        attrib.Value = tdNode.Attributes[a].Value;
                        cell.Attributes.Add(attrib);
                    }
                    cell.InnerXml = trNodes[r].SelectNodes("td")[c].InnerXml;
                    if (rotate) cells[c, r] = cell;
                    else cells[r, c] = cell;
                }
            }
            //return cells;
            for (int cR = cells.GetLowerBound(0); cR <= cells.GetUpperBound(0); cR++)
            {
                XmlNode rowNode = targetContext.CreateElement("tr");
                for (int cC = cells.GetLowerBound(1); cC <= cells.GetUpperBound(1); cC++)
                {
                    Cell cell = cells[cR, cC];
                    XmlNode colNode = targetContext.CreateElement("td");
                    for (int a = 0; a < cell.Attributes.Count; a++)
                    {
                        colNode.Attributes.Append(cell.Attributes[a]);
                    }
                    colNode.InnerXml = cell.InnerXml;
                    if (rotate)
                        for (int rA = 0; rA < rowAttribs[cC].Count; rA++)
                        {
                            if (colNode.Attributes[rowAttribs[cC][rA].Name] != null)
                            {
                                colNode.Attributes[rowAttribs[cC][rA].Name].Value = rowAttribs[cC][rA].Value;
                            }
                            else
                            {
                                XmlAttribute newAttrib = targetContext.CreateAttribute(rowAttribs[cC][rA].Name);
                                newAttrib.Value = rowAttribs[cC][rA].Value;
                                colNode.Attributes.Append(newAttrib);
                            }
                        }
                    rowNode.AppendChild(colNode);
                }
                if (!rotate)
                    for (int rA = 0; rA < rowAttribs[cR].Count; rA++)
                    {
                        rowNode.Attributes.Append(rowAttribs[cR][rA]);
                    }
                targetParentNode.AppendChild(rowNode);
            }
        }

        class Cell
        {
            private List<XmlAttribute> _Attributes;

            public List<XmlAttribute> Attributes
            {
                get { return _Attributes; }
                set { _Attributes = value; }
            }

            private string _InnerXml;

            public string InnerXml
            {
                get { return _InnerXml; }
                set { _InnerXml = value; }
            }

        }
    }
}



Open Source | Software Development | Web Development


 

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.

Contact Me 

