Annoying SQL Server 2008 Bugs That Are Kinda Ridiculous

by Jon Davis 25. February 2009 13:06

Generally, a rewritten interface is supposed to preserve all of the previous interface's measures of quality, or else the rewrite shouldn't be done. That rarely happens, because it's no fun to go back to last year's drawing board when reviewing next year's software. But when "quality standards" are measured as acutely as they are in enterprise software, there's no excuse. On this one, I have to give Microsoft one deserved thumbs down.

SQL Server 2008 is a really big software product--big in its feature set, big in its quality standards, big in market footprint and prevalence, and big in terms of just sheer size on the hard disk, in RAM, and in Windows config munging (registry bits, services, etc.).

To be fair, I haven’t come across many serious SQL Server 2008 runtime bugs; the only one I found was the Ctrl+Shift+M editor bug I blogged about earlier.

No, this bug is in the installation process. SQL Server 2005 had an annoying installer bug where the services wouldn't be shut down before service packs installed, resulting in a reboot warning, and the services wouldn't be restarted before the Vista user configurator ran (which was difficult to find and retry after it failed the first time due to the services being down). Even the most recent service pack had this problem.

But SQL Server 2008 has a new installer.

No, I take that back. SQL Server 2008 requires a new installer. This is complaint #1. After sitting and waiting for one or two minutes for SQL Server Express 2008 w/ Adv. Tools & services to extract, it finally pukes and says “Sorry, you have to have a new version of the Windows Installer” (equivalent text). And then it unloads. No links, no help, just leaving you out in the cold. WHY ON EARTH DOES MICROSOFT REQUIRE THE USER TO TRACK DOWN A NEW INSTALLER RUNTIME BEFORE SQL SERVER 2008 WILL EVEN INSTALL?!! One big, fat “WHAT WERE THEY THINKING?!” for Microsoft. That thing should have been bundled with each and every SQL Server 2008 installer option, including SQL Server Express, et al.

Once the new Windows Installer version is found, and the computer enjoys the tiresome reboot it demands, and I sit through the one-or-two-minute extraction again, I have expectations of a rock-solid installer and experience. But no, it looks and feels like a Windows 98 Active Desktop window, DHTML-style links everywhere.

But the biggest "WHAT'S WRONG WITH YOU, REDMOND?!" moment comes when you fire up the installer from within the DHTML-esque launcher window, and then, as you start to see the progress bar work its way across the screen, you Alt-Tab to the first window—the launcher window—and close it, to clear up screen resources and desktop space. It closes fine. But a few seconds later, suddenly the real installer (the one with the progress bar) pukes in your face, complaining that a bunch of files are missing. Ohnoes! Is it a corrupt CD/DVD image or download?

Nope, Microsoft just let their QA staff take a vacation when the brand spanking new installer that Microsoft DEMANDED that you install was actually put together.

What essentially happens is this: the initial extractor waits for the launcher app to exit, and when it does, it cleans up all of the extracted installer files. Once the launcher app with the DHTML-esque interface is shoved in your face and you click a link, the launcher window should be minimized or hidden until the launched application finishes; failing that, the launched app should be a modal window so you can't close the launcher. But none of this is the case. So you have to learn by being burned that closing the launcher app will destroy the installer in progress, potentially including its rollback files. Wee.

But, once it's installed, I'll give it to Microsoft that I'm still excited about SQL Server 2008. It's yesteryear's news, but I still have a lot to learn about it now that it's installed.

UPDATE: In addition to restoring this post (not sure why I deleted it), I wanted to mention that some people, including myself, have struggled with another problem: SQL Server just taking forever—as in, like, 24 hours-ish—to install. I found the cause of this in the SQL 2005 days, and the same cause probably still exists in 2008. Basically, the ACL (Windows Access Control List) of each and every DLL and registry addition is individually verified by the installer against Windows Security / Active Directory. On a standalone machine, this is instantaneous, but in an Active Directory enterprise environment, the verification is sent to the AD controller, and the AD controller then echoes the verification request to every tree in the forest!!

The official workaround to this for AD admins is to configure AD not to replicate the verification requests (it’s some weird setting buried deep in the AD settings, I don’t remember the MS Support link) when the forest is deemed too large to handle it. This will allow the AD controller to be self-sufficient in replying to these verifications all on its own.

But my own workaround was simpler: disconnect the network cable before installing, and reconnect it after it is installed.

The official workaround should have been that Microsoft just go give itself a spanking and come back when it is truly red-faced sorry for verifying the ACL on every single file and registry edit that it imposes.


Tags:

SQL Server

Simple Dependency Mapping

by Jon Davis 19. February 2009 07:19

Here’s my idea of simple dependency mapping, without an IoC container. The idea is to drop your binaries into the bin directory, optionally (yes, optionally) name your dependencies somewhere such as in app.config / web.config, and let ‘er rip.

I still hate Castle Windsor, et al, and will never write code around it if I can avoid it.

By the way, this is just DI, not IoC. For IoC without an IoC container, I still like eventing. See my past posts:
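In the meantime, here's a minimal sketch of what IoC-by-eventing can look like. All of the names here (IMyService, MyConsumer, and so on) are hypothetical, invented for illustration: the consumer never looks up or constructs its dependency; it just raises an event and lets whoever wired it up supply the implementation.

```csharp
using System;

// Hypothetical illustration of IoC via eventing (no container involved).
public interface IMyService
{
    string Ping();
}

public class MyConsumer
{
    // Inversion of control: the consumer announces that it needs a
    // service rather than locating or constructing one itself.
    public event Func<IMyService> ServiceRequested;

    public string DoWork()
    {
        Func<IMyService> handler = ServiceRequested;
        if (handler == null)
            throw new InvalidOperationException("No service has been wired up.");
        return handler().Ping();
    }
}

public class MyService : IMyService
{
    public string Ping() { return "pong"; }
}

public static class Program
{
    public static void Main()
    {
        MyConsumer consumer = new MyConsumer();
        // The "container" is just an event subscription.
        consumer.ServiceRequested += () => new MyService();
        Console.WriteLine(consumer.DoWork()); // prints "pong"
    }
}
```
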

Dependency mapping implementation example (conceptual; one can and should use better code than this):

// Scenario infers IMyService as a referenced/shared interface-only type library
// used by both the invoker and the IMyService implementor.

// Example 1:
// match on Type.FullName (GetDependency returns a System.Type here, hence the cast)
IMyService myService = (IMyService)Activator.CreateInstance(
	(Type)Dependency.GetDependency<IMyService>(ConfigurationManager.AppSettings["myServiceProvider"]));
myConsumer.MyDependency = myService;

// Example 2:
// match on Type.Name case-insensitively; NOTE: This is not recommended.
IMyService myOtherService = (IMyService)Activator.CreateInstance(
	(Type)Dependency.GetDependency<IMyService>(ConfigurationManager.AppSettings["myServiceProvider2"], true));
myConsumer.MyDependency = myOtherService;

// Example 3:
// Only one match? Too lazy to name it? Just take it. NOTE: This is not recommended.
// (With zero matches this returns null, and with multiple matches it returns a
// List<Type>, so the cast to Type will throw in either of those cases.)
IMyService whateverService = (IMyService)Activator.CreateInstance(
	(Type)Dependency.GetDependency<IMyService>());
myConsumer.MyDependency = whateverService;

Code:

using System;
using System.Collections.Generic;
using System.Reflection;

// ...

public class Dependency
{
    public static Dictionary<Type, List<Type>> KnownServices
        = new Dictionary<Type, List<Type>>();


    /// <summary>
    /// Returns any <see cref="Type"/> in the current <see cref="AppDomain"/>
    /// that is a <typeparamref name="T">T</typeparamref>. 
    /// </summary>
    /// <typeparam name="T">The type in the <see cref="AppDomain"/> that the
    /// return <see cref="Type"/> should be or inherit.</typeparam>
    /// <returns></returns>
    public static object GetDependency<T>()
    {
        return GetDependency<T>(null);
    }

    /// <summary>
    /// Returns any <see cref="Type"/> in the current <see cref="AppDomain"/>
    /// that is a <typeparamref name="T">T</typeparamref>. 
    /// If <paramref name="by_name"/> is not <code>null</code>, only the type(s) matching
    /// the specified <paramref name="by_name"/> with the <see cref="Type.FullName"/> is returned.
    /// </summary>
    /// <typeparam name="T">The type in the <see cref="AppDomain"/> that the
    /// return <see cref="Type"/> should be or inherit.</typeparam>
    /// <param name="by_name">The type name that should match the return value(s).</param>
    /// <example>object myService = Activator.CreateInstance(GetDependency&lt;IMyService&gt;);</example>
    public static object GetDependency<T>(string by_name)
    {
        return GetDependency<T>(by_name, false);
    }

    /// <summary>
    /// Returns any <see cref="Type"/> in the current <see cref="AppDomain"/>
    /// that is a <typeparamref name="T">T</typeparamref>. 
    /// If <paramref name="by_name"/> is not <code>null</code>, only the type(s) matching
    /// the specified name will be returned.
    /// </summary>
    /// <typeparam name="T">The type in the <see cref="AppDomain"/> that the
    /// return <see cref="Type"/> should be or inherit.</typeparam>
    /// <param name="by_name">The type name that should match the return value(s).</param>
    /// <param name="short_name_case_insensitive">If true, ignores the namespace,
    /// casing, and assembly name. For example, a match on type <code>Dog</code>
    /// might return both <code>namespace_a.doG</code> and <code>NamespaceB.Dog</code>.
    /// Otherwise, a match is made only if the <see cref="Type.FullName"/> matches
    /// exactly.
    /// </param>
    /// <returns>A <see cref="Type"/>.</returns>
    /// <example>object myService = Activator.CreateInstance(GetDependency&lt;IMyService&gt;);</example>
    public static object GetDependency<T>(string by_name, bool short_name_case_insensitive)
    {
        if (by_name != null && !short_name_case_insensitive)
        {
            // Note: for types outside the calling assembly and mscorlib,
            // Type.GetType expects an assembly-qualified name here.
            return Type.GetType(by_name, true);
        }
        Init<T>();
        var t_svcs = KnownServices[typeof(T)];
        if (string.IsNullOrEmpty(by_name))
        {
            if (t_svcs.Count == 0) return null;
            if (t_svcs.Count == 1) return t_svcs[0];
            return t_svcs; // more than one, return the whole list
        }
        return t_svcs.Find(t => string.Equals(t.Name, by_name, StringComparison.OrdinalIgnoreCase));
    }

    private static readonly Dictionary<Type, bool> Inited = new Dictionary<Type, bool>();
    private static void Init<T>()
    {
        if (Inited.ContainsKey(typeof(T)) && Inited[typeof(T)]) return;
        if (!KnownServices.ContainsKey(typeof(T)))
            KnownServices.Add(typeof(T), new List<Type>());
        var refAssemblies = new List<Assembly>(AppDomain.CurrentDomain.GetAssemblies());
        foreach (var assembly in refAssemblies)
        {
            Type[] types;
            try
            {
                types = assembly.GetTypes();
            }
            catch (ReflectionTypeLoadException)
            {
                continue; // skip assemblies containing types that can't be loaded
            }
            foreach (var type in types)
            {
                if (type.IsClass && !type.IsAbstract &&
                    typeof(T).IsAssignableFrom(type))
                {
                    KnownServices[typeof(T)].Add(type);
                }
            }
        }
        Inited[typeof(T)] = true;
    }
}
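For completeness, here is what the appSettings entries read by the examples above might look like. The key names match the examples, but the type and assembly names are hypothetical; note that the exact-match path hands the value to Type.GetType, which expects an assembly-qualified name for types outside the calling assembly and mscorlib, while the case-insensitive path matches on the short type name only.

```xml
<configuration>
  <appSettings>
    <!-- Example 1: exact match; assembly-qualified Type.FullName -->
    <add key="myServiceProvider"
         value="MyCompany.Services.MyService, MyCompany.Services" />
    <!-- Example 2: short type name, matched case-insensitively -->
    <add key="myServiceProvider2" value="myservice" />
  </appSettings>
</configuration>
```
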


Tags:

C# | Software Development

Windows 7 Will Thrash SSD-Based Systems (And Microsoft Won't Fix It)

by Jon Davis 16. February 2009 00:50

A couple weeks ago I successfully installed Windows 7 on my Dell Mini 9 netbook, and in the end it's only using up about 600MB of RAM, one or two hundred megabytes more than XP, which confirms reports that it's lighter weight than Windows Vista and suitable for netbooks.

Unfortunately, a feature introduced in Vista remains in Windows 7 that could pose a problem to the lifespan of these netbooks. This feature is a Scheduled Task that is preconfigured to defrag the primary hard drive every Wednesday night. That's a good feature for "normal" hard drives, but bad news for SSDs. Defragmenting an SSD (solid-state disk) is hard on the drive: a normal hard drive can handle millions of reads/writes, but an SSD is limited to only tens of thousands of overwrites per sector. Furthermore, SSDs have zero (0) seek time, so defragmentation is entirely pointless on an SSD in the first place. If the drive is significantly full and the files are often fragmented, the defragmentation process will do a lot of damage to the SSD.

I reported this here:

https://connect.microsoft.com/feedback/ViewFeedback.aspx?FeedbackID=410949&SiteID=647&wa=wsignin1.0

Unfortunately, Microsoft updated the Status on this to "Won't Fix" and commented that they might look at it in the next version of Windows after Windows 7.

Well, it's not worth fighting about, but at least be aware. If you do install Windows 7 on an SSD-based netbook or other SSD-based computer, just be sure you fire up the Task Scheduler administrative tool, track down that scheduled task, and disable it.
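If memory serves, the task lives under Microsoft > Windows > Defrag in the Task Scheduler library. Assuming that path holds in your build (worth verifying, since Windows 7 is still pre-release), it can also be disabled from an elevated command prompt:

```shell
REM Disable the weekly defrag task. The task path below is an assumption;
REM confirm it in the Task Scheduler UI first.
schtasks /Change /TN "\Microsoft\Windows\Defrag\ScheduledDefrag" /Disable
```
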

Nine Reasons Why 8GB Is Only Just Enough (For A Professional Business Software Developer)

by Jon Davis 13. February 2009 21:29

Today I installed 8GB on my home workstation/playstation. I had 8GB lying around already from a voluntary purchase for a prior workplace (I took my RAM back and put the work-provided RAM back in before I left that job), but that brand of RAM didn't work correctly on my home PC's motherboard. It's all good now, though: with some high-quality performance RAM from OCZ, my Windows 7 system self-rating on RAM I/O jumped from 5.9 to 7.2.

At my new job I had to request a RAM upgrade from 2GB to 4GB. (Since it's 32-bit XP, I couldn't go any higher.) When I asked when 64-bit Windows Vista or Windows 7 would be put on the table for consideration as an option for employees, I was told "there are no plans for 64-bit".

The same thing happened with my last short-term gig. Good God, corporate IT folks everywhere are stuck in the year 2002. I can barely function at 4GB, can’t function much at all at 2GB.

If you are a multi-role developer and aren't already saturating at least 4GB of RAM, you are throwing away your employer's money; and if you are IT and not providing at least 4GB of RAM to developers, and not actively working on adding corporate support for 64-bit on employees' workstations, you are costing the company a ton of money in lost productivity!! I don't know how many times I've seen people restart their computers, or sit and wait for two minutes for Visual Studio to come up, because their machine is bogged down in a swap file. That was "typical" half a decade ago, but it's not acceptable anymore. The same is true of hard drive space: fast 1-terabyte hard drives are available for less than $100 these days, so there is simply no excuse. For any employee who makes more than X (say, $25,000), for Pete's sake, throw in an extra $1,000-$2,000 or so more and get the employee two large (24-inch) monitors, at least 1TB of hard drive space (ideally four drives in a RAID 0+1 array), 64-bit Windows Server 2008 / Windows Vista / Windows 7, a quad-core CPU, and 8GB of high-performance (800+ MHz) RAM. It's not that that's another $2,000 or so to lose; it's that just $2,000 will save you many thousands more. By quadrupling the performance of your employee's system, you'd effectively double the productivity of your employee; it's like getting a new employee for free. And if you are the employee, making double of X (say, more than $50,000), and your employer could somehow allow it (and they should; shame on them if they don't and they won't do it themselves), you should go out and get your own hardware upgrades. Make yourself twice as productive, and earn your pay with pride.

In a business environment, whether one is paid by the hour or salaried (already expected to work X hours a week, which is effectively loosely translated to hourly anyway), time = money. Period. This is not about developers enjoying a luxury, it’s about them saving time and employers saving money.

Note to the morons who argue "this is why developers are writing big, bloated software that sucks up resources": Dear moron, this post is from the perspective of an actual developer's workstation, not a mere bit-twiddling programmer's—a developer, that is, who wears many hats and must not just write code but manage database details, work with project plans, document technical details, electronically collaborate with teammates, test and debug, etc., all in one sitting. Nothing here actually recommends or even contributes to writing big, bloated software for an end user. The objective is productivity; your skills as a programmer are a separate concern. If you are producing bad, bloated code, the quality of the machine on which you wrote it has little to nothing to do with that—on the contrary, a poor developer system can lead to extremely shoddy code, because the time and patience required just to refactor and re-test become such a huge burden. If you really want to test your code on a limited machine, you can rig VMware / Virtual PC / VirtualBox to temporarily run with less RAM, etc. You shouldn't have to punish yourself with poor productivity while you are creating the output. Such punishment is far more monetarily expensive than the cost of RAM.

I can think of a lot of reasons for 8+ GB RAM, but I’ll name a handful that matter most to me.

  1. Windows XP / Server 2003 alone takes up half a gigabyte of RAM (Vista / Server 2008 takes up double that). Scheduled tasks and other processes cause the OS to peak out at some 50+% more. Cost: 512-850MB. Subtotal @nominal: ~512MB; @peak: 850MB
  2. IIS isn’t a huge hog but it’s a big system service with a lot of responsibility. Cost: 50-150. Subtotal @nominal: ~550MB; @peak 1GB.
  3. Microsoft Office and other productivity applications often need to be used more than one at a time. For more than two decades, modern computers have supported a marvelous feature called multi-tasking. This means that if you have Outlook open, and you double-click a Microsoft Word attachment, and upon reading it you realize that you need to update your Excel spreadsheet, which in your train of thought leads you to updating an Access database, and then you realize that these updates result in a change of product features so you need to reflect these details in your PowerPoint presentation, you should have been able to open each of these applications without missing a beat, and by the time you're done you should be able to close all these apps in no more than one passing second per click of the [X] close button of each app. Each of these apps takes up as much as 100MB of RAM, Outlook typically even more, and Outlook is typically always open. Cost: 150MB-1GB. Subtotal @nominal: 700MB; @peak: 2GB.
  4. Every business software developer should have his own copy of SQL Server Developer Edition. Every instance of SQL Server Developer Edition takes up a good 25MB to 150MB of RAM just for the core services, multiplied by each of the support services. Meanwhile, Visual Studio 2008 Pro and Team Edition come with SQL Server 2005 Express Edition, not 2008, so for some of us that means two installations of SQL Server Express. Both SQL Server Developer Edition and SQL Server Express Edition are ideal to have on the same machine since Express doesn’t have all the features of Developer and Developer doesn’t have the flat-file support that is available in Express. SQL Server sitting idly costs a LOT of CPU, so quad core is quite ideal. Cost: @nominal: 150MB, @peak 512MB. Subtotal @nominal: 850MB; @peak: 2.5GB. We haven’t even hit Visual Studio yet.
  5. Except in actual Database projects (not to be confused with code projects that happen to have database support), any serious developer would use SQL Server Management Studio, not Visual Studio, to access database data and to work with T-SQL tasks. This would be run alongside Visual Studio, but nonetheless as a separate application. Cost: 250MB. Subtotal @nominal: 1.1GB; @peak: 2.75GB.
  6. Visual Studio itself takes the cake. With ReSharper and other popular add-ins like PowerCommands installed, Visual Studio just started up takes up half a gig of RAM per instance. Add another 250MB for a typical medium-size solution. And if you, like me lately, work in multiple branches and find yourself having to edit several branches for different reasons, one shouldn’t have to close out of Visual Studio to open the next branch. That’s productivity thrown away. This week I was working with three branches; that’s 3 instances. Sample scenario: I’m coding away on my sandbox branch, then a bug ticket comes in and I have to edit the QA/production branch in an isolated instance of Visual Studio for a quick fix, then I get an IM from someone requesting an immediate resolution to something in the developer branch. Lucky I didn’t open a fourth instance. Eventually I can close the latter two instances down and continue with my sandbox environment. Case in point: Visual Studio costs a LOT of RAM. Cost @nominal 512MB, @peak 2.25GB. Subtotal @nominal: 1.6GB; @peak: 5GB.
  7. Your app being developed takes up RAM. This could be any amount, but don’t forget that Visual Studio instantiates independent web servers and loads up bloated binaries for debugging. If there are lots of services and support apps involved, they all stack up fast. Cost @nominal: 50MB, @peak 750MB. Subtotal @nominal: 1.65GB; @peak: 5.75GB.
  8. Internet Explorer and/or your other web browsers take up plenty of RAM. Typically 75MB for IE to be loaded, plus 10-15MB per page/tab. And if you’re anything like me, you’ll have lots and lots and LOTS of pages/tabs by the end of the day; by noon I typically end up with about four or five separate IE windows/processes, each with 5-15 tabs. (Mind you, all or at least most of them are work-related windows, such as looking up internal/corporate documents on the intranet or tracking down developer documentation such as API specs, blogs, and forum posts.) Cost @nominal: 100MB; @peak: 512MB. Subtotal @nominal: 1.75GB; @peak: 6.5GB.
  9. No software solution should go untested on as many platforms as is going to be used in production. If it’s a web site, it should be tested on IE 6, IE 7, and IE 8, as well as current version of Opera, Safari 3+, Firefox 1.5, Firefox 2, and Firefox 3+. If it’s a desktop app, it should be tested on every compatible version of the OS. If it’s a cross-platform compiled app, it should be tested on Windows, Mac, and Linux. You could have an isolated set of computers and/or QA staff to look into all these scenarios, but when it comes to company time and productivity, the developer should test first, and he should test right on his own computer. He should not have to shutdown to dual-boot. He should be using VMWare (or Virtual PC, or VirtualBox, etc). Each VMWare instance takes up the RAM and CPU of a normal system installation; I can’t comprehend why it is that some people think that a VMWare image should only take up a few GB of hard drive space and half a gig of RAM; it just doesn’t work that way. Also, in a distributed software solution with multiple servers involved, firing up multiple instances of VMWare for testing and debugging should be mandatory. Cost @nominal: 512MB; @peak: 4GB. Subtotal @nominal: 2.25GB; @peak: 10.5GB.

Total peak memory (on 64-bit Vista SP1, whose extra footprint was not accounted for in #1): 11+GB!!!

Now, you could argue all day long that you can “save money” by shutting down all those “peak” processes to use less RAM rather than using so much. I’d argue all day long that you are freaking insane. The 8GB I bought for my PC cost me $130 from Dell. Buy, insert, test, save money. Don’t be stupid and wasteful. Make yourself productive.

Let Me Google That For You

by Jon Davis 9. February 2009 20:59

I was hanging out in the C# IRC channel on FreeNode and someone blew his top at a newbie coder with a few "RTFM" statements.

He gave a funny link. Instead of saying "Google it" or giving a Google query URL, one can use:

http://www.letmegooglethatforyou.com/?q= followed by your URL-encoded query, e.g.:

http://www.letmegooglethatforyou.com/?q=how+do+I+do+threading+in+C%23

C# ‘var’ Not So Bad

by Jon Davis 2. February 2009 12:01

Been using ‘var’ pretty regularly lately. It doesn’t bother me anymore. I’ve come to realize that it’s just like VB’s

Dim myVal As New MyType

.. which I enjoyed using back in the day for its strong typing yet terse coding style.
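To illustrate (MyType here is just a stand-in class for the sake of example): 'var' is compile-time type inference, not late binding, so the strong typing is identical to spelling the type out.

```csharp
using System;

class MyType
{
    public string Name = "example";
}

class Program
{
    static void Main()
    {
        // Exactly equivalent to: MyType myVal = new MyType();
        // and in spirit to VB's:  Dim myVal As New MyType
        var myVal = new MyType();

        Console.WriteLine(myVal.Name); // statically typed; prints "example"

        // myVal = "a string"; // would not compile; the inferred type is fixed
    }
}
```
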


15 Reasons To Stay The Heck Away From Linux

by Jon Davis 1. February 2009 00:59

Linux folks have their own reasons why people should use Linux. I get sick of it sometimes. I’ve tinkered with many Linux distros, and no, not as a novice user. Combined, I’ve probably spent a couple years of nonstop time on Linux. I solved many problems like setting up services and writing apps. And so far I’ve reached my own conclusions.

  1. Linux is a religious cult. You remember the wacko in Waco? With the visions of heaven and the gun-wielding and unrealistic plans of taking over the world and all? ... People who swear by Linux for the most part only hate Windows for the sake of hating Microsoft; it has little to nothing to do with the overall quality of the Windows product. They just feel that they have the moral prerogative to spread hate of Microsoft throughout the world.
  2. The only people who swear by Linux are either broke (or just cheap), grew up in a religious cult, or tasted a really old and archaic version of Mac and Windows (like Mac OS 6 and NT 4.0), ran away, and never looked back. Typically all of the above.
  3. There really isn’t much available in Linux that you don’t get cross-compiled in either the latest version of Windows or the latest version of Mac OS X. No, not Apache, not Perl, not Python, not Java, not Eclipse, not Mono, not Blender 3D, not OpenOffice, not Pidgin, not BIND, not OpenSSH, not most business apps, not many network apps, not many libs, not most desktop apps. Whatever really is exclusive, between Windows SUA and Cygwin, almost everything that runs in Linux is available in Windows—with a few minor multimedia exceptions of which there are better-quality Windows or OS X commercial alternatives anyway. And because Mac OS X is built on other flavors of UNIX, the same is true of it as well.
  4. Windows and, to an extent, Mac OS take the audacious effort to keep user-exposed details sane and organized, with the regretful exception of Microsoft's abuse of the Windows registry. The archaic, confusing organization of the file structure (/var, /usr, etc.) in Linux, and the lack of configuration consistency (on par with Windows' registry abuse), reflect the decades-old history of *nix. None of that is even necessary just to run and/or write and debug your software unless you do choose *nix; code is code, and with proper software abstractions and clean organization it shouldn't be necessary to retain filthy rotten legacy organization and patterns.
  5. With Linux, there is no single entity who is accountable to you as a user for the successful evolution of major design characteristics of the operating system, of OS APIs, or of the user-interfacing shells. No one Linux distro can take on significant scope, and even then unless you are a paying customer just like Microsoft’s paying customers the maintainers of a distro have no significant incentive to care.
  6. Linux is not an original OSS effort; it's a freeware x86 port of a third-party commercial OS that's been "grown" and that has evolved with bubblegum and tape (much like Windows 9x).
  7. Linux hasn’t ever enjoyed a serious architectural redesign and rework like the Mac has enjoyed once or twice (OS 1-8 to OS 9 to particularly OS X) and that Microsoft has enjoyed three times (DOS to Windows 1-3.x to Windows 9x to particularly Windows NT/2000/XP/Vista/7 .. and Singularity is a sign of what’s coming down the road).
  8. The claims that the latest versions of Linux are more stable than the latest versions of Windows or of OSX are just plain ignorant. Windows has come a LONG way, and if you’re going to go *nix, OS X is probably a safer bet. For that matter, Windows SUA (Subsystem for UNIX Application) is probably a fairly safe bet if you’re going UNIX for your apps as well.
  9. You don’t get Direct3D on Linux. (OpenGL, which is more broadly available but less powerful than Direct3D, is available on Windows, so Linux doesn’t have an exclusive counter-argument. Incidentally, because of hardware driver limitations, OpenGL on Linux is far less available than either OpenGL or Direct3D on Windows or on Mac OS.)
  10. You don’t get PowerShell on Linux. (Bash, Python, and the like, which are more broadly available but less powerful than PowerShell, are available on Windows, so Linux doesn’t have an exclusive counter-argument.)
  11. You don’t get IIS 7 on Linux. (Apache, which is more broadly available but less powerful than IIS 7, is available on Windows, so Linux doesn’t have an exclusive counter-argument.)
  12. Despite Mono, you don’t get full-blown .NET nor Visual Studio (the most popular IDE on the planet) on Linux. No WCF, no WPF, no LINQ. (Python, Java, GTK#, Qt, wxWidgets, et al, which are more broadly available but less powerful than the Windows-based offerings, are available on Windows, so Linux doesn’t have an exclusive counter-argument.)
  13. Comparatively speaking, unproductivity is almost guaranteed. Unless you like to judge yourself on the basis of geeky so-called “real programming” (also aka “bit-twiddling”, and likewise you find scientific apps intellectually stimulating), you won’t have nearly as much fun just getting stuff done and with high quality results as you get with the most versatile operating system and developer toolset available to computing: modern Visual Studio on modern Windows.
  14. While Linux has evolved over time, it has not evolved at the same rate as Mac OS and Windows as of late. Solutions built on Linux do little to enhance or place demands upon Linux itself; their growth and improvements are mostly self-serving, improving application components only, not Linux, and merely adding to the add-on repo dogpiles. Apple and Microsoft, meanwhile, both strive hard to significantly re-tailor their operating systems to meet the demands of both the application developer and the end user, particularly through improvements to core APIs, well-integrated services, and shells.
  15. Except to be cheap or to roll your own flavor (custom toppings), there's really not much point in going Linux unless your systems are already built around it. Linux is great for nearly-free personal server hosting and for cheap and simple scaling out. But it's quite a small wonder that, despite Red Hat, Ubuntu, SuSE, Mandriva, and other hopefuls, Linux has hardly made a splash in desktop space. If you're looking for a solid server, yes, there is CentOS, but there are other options like OpenSolaris and Windows Server 2008, the latter of which should be taken very seriously by anyone who is serious about architecture flexibility and stability.

Tags:

Linux


 

Powered by BlogEngine.NET 1.4.5.0
Theme by Mads Kristensen

About the author

Jon Davis (aka "stimpy77") has been a programmer, developer, and consultant for web and Windows software solutions professionally since 1997, with experience ranging from OS and hardware support to DHTML programming to IIS/ASP web apps to Java network programming to Visual Basic applications to C# desktop apps.
 
Software in all forms is also his sole hobby, whether playing PC games or tinkering with programming them. "I was playing Defender on the Commodore 64," he reminisces, "when I decided at the age of 12 or so that I want to be a computer programmer when I grow up."

Jon was previously employed as a senior .NET developer at a very well-known Internet services company whom you're more likely than not to have directly done business with. However, this blog and all of jondavis.net have no affiliation with, and are not representative of, his former employer in any way.
