Game Programming Wiki

by Jon Davis 1. September 2007 04:32

Very cool link for anyone wanting to get into game programming: http://gpwiki.org/


Tags:

Open Source | Computers and Internet | Software Development | PC Gaming | Linux | Cool Tools | Xbox Gaming

Open-Source Desktop: Giving It Another Go

by Jon Davis 31. August 2007 15:02

A week or so ago I posted a blog entry describing why I felt that Linux simply isn't the long-term answer to the need for an open-source, community-supported desktop operating system. I got some good feedback on this, as well as some not-so-helpful feedback ("here's another troll", "this goes out to everyone ELSE, not to Jon Davis, who has clearly made up his mind", etc.). I also commented that Haiku looks beautiful and is quite promising, since it is based on (that is, inspired by and compatible with) BeOS, which is the closest thing I've seen yet to an OS done right--but that Haiku won't be the answer, either, until it gets past R1, which may or may not ever happen.

Meanwhile, I brought up ReactOS and how it isn't the answer, either, because if we wanted Windows we could just install Windows. (The same can't be said of Haiku / BeOS, because BeOS is no longer available.) Over the last week I downloaded the latest ReactOS build and ran it in VMWare. It's not nearly as far along as Haiku is in terms of stability (not to mention front-end aesthetic talent).

Finally, one of my biggest complaints about Linux--the ridiculously arcane file system layout, which never seems to go away--seemed to have been resolved in GoboLinux, until I realized that it's even worse: it's Proper Cased, and since Linux uses a case-sensitive file naming scheme (which sucks), that makes GoboLinux nearly unusable for administration. You have to constantly check the case of each and every letter rather than just trust that everything will be lower case.

I came across a few interesting tidbits of information since that post. I also got my old laptop back from repair (the keyboard was replaced over a missing 'O' key)--an Acer Aspire 5050 that I bought at Wal-Mart about ten or eleven months ago. Since that laptop has since been replaced, I decided that before I pawn it off I should format the drives and actually try installing Ubuntu Linux on it, so that no one can say I've only tried Ubuntu within VMWare. Unfortunately, the latest Ubuntu Live CD doesn't even boot on my newer Toshiba Satellite X205-S9359, so I couldn't even try Ubuntu on the machine that replaced the old laptop. Sheesh.

Compiz Fusion seems to be for Linux what Aero is for Windows, at least in theory. I still have not gotten it to work fully; "GL Desktop", which I assume is related, doesn't seem to do anything when I turn it on from the System menu. OpenGL itself does work--I ran the OpenGL implementation of Tux Racer, and it worked beautifully. I tried the drivers from ATI/AMD, but the stuff won't execute. Changing the Compositing option in the xorg.conf file doesn't help. *sigh* Oh well, I'll keep tinkering.  UPDATE: I did get it to work, partially. At least, I get the wobbly windows. Not much else, though; I can't get the Emerald themes to turn on, for instance. It seems there are some limitations around my video card chipset such that they disabled 3D support (even though my video chipset fully supports high-performance 3D).
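
For anyone wanting to poke at the same thing, here's roughly the sanity check I've been running--assuming the stock X.Org paths, which may differ on your distro:

    # Does xorg.conf enable the Composite extension at all?
    grep -A 2 'Section "Extensions"' /etc/X11/xorg.conf
    # A compositing-ready config typically contains:
    #     Section "Extensions"
    #         Option "Composite" "Enable"
    #     EndSection
    # And is the running X server actually exposing it?
    xdpyinfo | grep -i composite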

Device support for Linux at install time is improving, as are the tools for device support, but it is still an awful mess. I have never seen so many files fly across my terminal screen in my life just to try to install the ALSA audio driver. And with it installed, there's still no sound. Hello, guys? If there's no sound, the audio control panels shouldn't behave like everything's hunky-dory. For the specific sound card driver, I've scoured Google and the Ubuntu forums regarding ALC883 support, and it's clear that other people are having trouble with this sound card, so I'll have to keep tinkering with it. But that's beside the point with regard to it being a mess. Scanning the forums for support, it confounds me how comfortable people are with opening up configuration files and toying with them; the only difference now is that they use gedit instead of vi. Holy cow, someone needs to give these Linux developers a lesson on UX! It has nothing to do with editing in a Windows-like editor--I have actually, finally, gotten used to vi, and I prefer not to have to move my hand to the mouse to reach a scrollbar.
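
The triage steps I keep coming back to, for what they're worth (all standard ALSA tooling; device names will of course vary by machine):

    lsmod | grep snd       # is the HD-audio driver (snd_hda_intel) even loaded?
    aplay -l               # does ALSA see the ALC883 codec as a playback device?
    alsamixer              # make sure Master/PCM aren't muted ('M') or at zero
    speaker-test -c 2      # try to push actual tones out of both channels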

The ideal desktop operating system should not use antiquated techniques like service-proprietary configuration files, or configure / make / make install--not even if that stuff is hidden from view by some wrapper shell, which in the long run only makes things more complex. In fact, compiling anything outside of a JIT'er seems ridiculously arcane to me. Call it a matter of opinion, but that's one thing I really like about having a central authority (like Microsoft) that can basically say, this is exactly how hardware drivers should be deployed. Don't get me wrong, Windows is a mess of its own, with its Registry and so on. But then, that's part of why the notion of open-source desktops has gotten me curious (and critical) lately.

On that point, yes, I get it: Linux's advantage is that, being an open system, stuff can be "recompiled into the system" as needed. But it really makes me wonder why Bill Gates, rather than Linus Torvalds, is given the Borg treatment, when assimilation is done in Linux at a technical level, at runtime, in much the same way Microsoft traditionally did it using business agreements. For that matter, what's so wonderful about a system being "open" for some source code to compile against any of its many flavors and then (fingers crossed) maybe run, as opposed to having just a few "flavors" and putting time and energy (and, yes, money) into making sure that the stuff has already been compiled, is already known to run, and will almost certainly "just work", assuming there are no hardware driver conflicts?

Wouldn't it be ideal if there were a cleaner, pluggable hardware abstraction model that the operating system exposed and that drivers could just plug into, rather than the nerdy way the stuff is managed now? Isn't that essentially what a kernel is supposed to do, along with executing user-level applications? Virtual device drivers suddenly popped into my head, a la VMWare with its virtual machines and virtual devices. Abstraction is so cool. Say, why can't each basic hardware function that software expects to be able to use--file system I/O, video card / display, audio, keyboard, mouse--be tucked into a clean API that the hardware manufacturers' drivers sit on top of, rather than vice versa? Why can't we put them into a virtual hardware sandbox? Why can't hypervisors be taken to such an extent as to allow physical base-level hardware to be virtualized, so that each hardware device driver "sees" a virtualized reality?

Of course, then performance and virtualization management become the huge issues.

More importantly, this isn't a particularly realistic notion, since hardware drivers currently read and write memory spaces that the kernel maps to the physical device, and execute by way of things like IRQ events. Sometimes I wonder, though, why even that shouldn't be rethought. But now I'm getting into real physical hardware design space; it's not as though I can just pull up a trusty C compiler and recompile a new motherboard. Besides, putting hardware device manufacturers into a software sandbox certainly stifles their opportunities to innovate.

Over the weekend I thought it would be cool, if naive, to actually spawn off YAOS (Yet Another Operating System)--derived from nothing, but appended with virtual support for Win32 (like WINE) and Linux (a la Cygwin), while inherently being its own system. After all, that is essentially what hypervisor operating systems propose to do. The difference is that it would be ideal if the hypervisor operating system itself could be a viable operating system.

VMWare ESX and Xen, being hypervisor operating systems, lean on Linux kernel variants (Xen typically boots a Linux dom0 for management; ESX carries a Linux-based service console).

Windows Server 2008, having hypervisor support (however limited, I'm not sure), runs on, well, Windows.

Good starts on hypervisor concepts, but why not take the opportunity to flush out this legacy stuff and build a hypervisor-supporting system that can also be a new OS? Oh, how I would love it if, once R1 stabilizes, the Haiku operating system added hypervisor support!

I'll post to my blog here at http://www.jondavis.net/blog/ as I continue to tinker with Ubuntu on real hardware.

Gobo Linux: My Wish Has Come True!

by Jon Davis 20. August 2007 21:39

Over the weekend I was whining (a lot) about the lame directory structure in Linux and how ridiculously impossible the gobbledygook is to understand at a glance. Someone pointed me to this link, which explains the mess that it is, and when I looked at it I discovered exactly what I was dreaming of: a version of Linux that has the whole structure cleaned up so that it makes sense!!!

Gobo Linux! http://en.wikipedia.org/wiki/GoboLinux  ...   http://www.gobolinux.org/

Downloading now!!

Too bad, though, that the directory names are in Mixed Case. Since Linux is case-sensitive, this'll be cumbersome...

UPDATE: Poking around at it now. Yeah, those Mixed Case names are a real bear. The concept is precisely what I had in mind -- refactor the whole directory structure, and then just use symbolic links for the old system in order to have compatibility. But not THESE file names, not Mixed Case. This makes me wish I could just go and run with GoboLinux and make a distribution of Gobo that goes back to lower-case names....
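
To give a flavor of the problem, here's the kind of session I mean (directory names per the Gobo docs; the exact tree may vary by release):

    cd /programs        # fails: no such file or directory
    cd /Programs        # works -- every path component must be typed Proper Cased
    ls /System/Links    # Gobo's symlink indexes into the per-program trees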

Why Linux Isn't The Open Source Desktop Answer

by Jon Davis 19. August 2007 01:59

On the PC, Windows wins, Linux loses, and I'm not even cheering for either side. The Mac is just off to the side, looking pretty like a squad of cheerleaders, but I have my eyes fixated straight up at the stars above. Even though I am sitting on the Windows side of the field, with my money invested in that team only because that's where I know the money will come back to me, I'm feeling bummed out, if not just plain bored, with this two- or three-sided competition.

Let's face it, there are only three major operating system families with any relevance in the modern personal computer marketplace: Mac OS, *nix, and Windows. And since Mac OS is, as of OS X, a rewrite built on Mach and BSD underpinnings--a variant of *nix--there are really only two; but since so much originality remains invested in the Mac environment, it deserves recognition of its own.

And then there was BeOS, a truly original operating system that was complete, thorough, fast, cool, beautiful, happily geeky yet user-friendly enough for my mother to use, but tragically lost to a failure to win the market. After its demise, its intellectual assets went the way of mobile computing (they were sold off to Palm). The whole episode reminds me a great deal of what happened to GEOS (my first exposure to a *real* user operating system) a decade prior, which in my view was the most innovative output of pure genius ever to execute in 64 KB of memory. GEOS gave the early Macintosh a run for its money, much like BeOS did ten years later, but both--and especially the latter--struggled to retain (or, for all I know, even obtain) profitability.

The obvious explanation for their failure is that the Macintosh and Windows operating systems already had their foot in the door, and consumers couldn't handle choosing between more than two systems. But I tend to think it had more to do with hardware. GEOS ran on the Commodore 64 when the C64 was already proving itself antiquated, and apart from the Commodore 128 (and, eventually, the x86 PCs that 16-bit GEOS targeted), it didn't really have enough hardware on which to install itself, at least not soon enough to matter. The Be operating system, meanwhile, started out on the PowerPC CPU. In fact, it was the true power of the PowerPC that initiated the dream of Be. Be's founder was a former Apple executive who was frustrated that the Macintosh operating system simply wasn't taking advantage of the hardware, and BeOS was intent on harnessing that power. Meanwhile, PowerPC chips were manufactured by IBM and Motorola, but they weren't exactly easy to come by except in Apple Macintosh hardware and proprietary systems. (Said proprietary systems included BeOS's own hardware, the BeBox.) When that proved to limit the BeOS market too heavily, BeOS re-targeted itself to the i386 platform, but by then it was too late; the excitement of a fresh new geeky operating system had waned, and it was now only a novelty.

To be more precise, and to agree with the observations of the general public: there was just too little reason for people to switch to Be. Not enough hardware driver support. Not enough software. Windows and Mac met those needs fine. Be had something for everyone, to be sure, but by the time the corporate funds ran out, it just wasn't nearly enough. Being a closed-source system on a commercial budget, the project simply dried up. An operating system of that magnitude really needs a decade to develop in order to take on the industry.

Then along comes Linus Torvalds. The guy slaps together (okay, that's harsh--he carefully knits together) a Unix-like kernel that runs on the i386 platform and makes it open source. The Linux operating system is birthed. Suddenly, the UNIX crowd--a huge crowd in the computing realm--has a zero-budget operating system for their microcomputer workstations that is stable, clean, fun, and cool (and isn't FreeBSD).

A decade later, we have a gajillion Linux "distros" (customized distributions of the Linux operating system, typically branded), a whole lot of eye candy, development tools, ridiculously commonplace mentions in geek and business tabloids, and a ubiquitous following among the non-Microsoft geek community. Hardly a day goes by that the word "Ubuntu" doesn't cross my web browser window at least two or three times; and if that ever stops, it will only be because another distro has replaced it.

So here I am now with VMWare, playing once again with an instance of Fedora (another Linux distro), and scratching my head wondering why, in one day, I have had to delete the entire thing and start over three times just to get everything installed and up to date, when it has all been made to be so click-easy. I wonder why I have lost at least three non-consecutive nights at the office to setting up Linux distros, if tools like 'yum' make it so quick and simple.

And of course the answer to my wonderings is consistently the same as it was many years ago when Linux first showed itself in my living room: despite the limitless extent of scripts, quick commands, and distribution GUI tools, it's still the same geek operating system that demands flawless geek sequencing of configuration and implementation. What do you do when RPM dependency trees get out of sync or outta whack? Spend many hours rebuilding them, or start over from scratch. What do you do when the VM gets suddenly reset (because a whole frickin' gigabyte of physical RAM didn't get allocated to it) while 'pup' extras are downloading and installing? Either go back and uninstall everything that got (half-)installed and then reinstall it all, or just start over and format the drive. Then there's the matter of the whole stack of libraries for things like MySQL. And, oh, the pain of getting MonoDevelop to just install on Fedora 7, with missing GtkSharp libraries leading me down a hell-hole path of looking for GTK+, Pango, GLib, atk, cairo, configure this, make that--oh, I give up, please just kill me now!!
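
For the record, the recovery dance, as best I understand it, goes something like this (package-cleanup comes from the yum-utils package and may not be installed by default):

    yum clean all                # throw away the stale repository metadata
    rpm -Va                      # verify installed files against the RPM database
    package-cleanup --problems   # list unresolved dependency problems
    yum install monodevelop      # then cross your fingers and re-attempt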

In fact, there is a laundry list of reasons why I think this whole operating system is just not the answer for converting Windows developers.

1. The directory structure isn't. It isn't a structure. It's a mess. I mean, what is the *real* difference between, oh, let's see, opt, usr, and var? I realize that some things just start trickling together into the same place, like configuration bits landing in a directory called "etc". But, I mean, for goodness' sake... putting configuration files into a folder named after the abbreviation for "etcetera"? Putting system-wide executable programs into a directory apparently named after the user ("/usr/bin")? Putting the users' files into a directory called "home"? There is no rhyme or reason to this. It just is. And it is because the geeks are used to it; they know where to put this stuff like they know how to deal with their siblings. But the so-called structure is maintained by memory and familiarity, not by sensibility.
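
For reference, here are the conventional FHS meanings of the names I'm griping about--the standard's stated intent, not necessarily what any given distro actually does:

    man hier    # the manual page that tries to document all of this
    # /etc   host-specific configuration (yes, really short for "etcetera")
    # /usr   shareable, mostly read-only programs and data; historically it
    #        held user home directories, which is where the name came from
    # /var   variable data: logs, mail spools, caches
    # /opt   self-contained add-on application packages
    # /home  users' personal files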

Both the Macintosh and Windows operating systems make sense with their system directory structures. Or at least, that was true of the Macintosh I once knew, with System 6 and System 7 (I haven't seen OS X up close yet). The operating system was in a directory called "System". Fonts were put into a pseudo-directory (a "suitcase") in the System directory called "Fonts". User documents were put into "Documents", applications into "Applications". And on Windows, it's a bit messier but still makes more sense than opt/var/usr. The operating system files go into "Windows", the name of the operating system. Software applications go into "Program Files" (with a space in the name that has driven us all crazy, I know, but at least it's obvious what it's for). Shared DLLs now go into Program Files, in a subdirectory called "Common Files". User documents and preferences once went into a horrible place called "Profiles" in the Windows directory, but that was moved to "Documents and Settings" in the root directory, and then in Windows Vista it moved again to just "Users". A lot of moving around, but always in pursuit of a sensible structure. And inside the user's profile directory, you have a whole new world, especially in Vista, with an organized directory structure spanning documents, multimedia files, temp files, app settings, and more. Each directory is sensibly named, no geeky bull. As Microsoft so annoyingly puts it, it's "people-ready".

And don't get me started on how brilliantly simple, perfect, readable, yet sufficiently geeky (terse and lowercase) BeOS's directory naming convention was...

2. It's consistent in things that users don't want to see, and inconsistent in things that they do. So many distros, each with its own touches. So many window managers, each with its own capabilities and layouts. But no one is cleaning up the ugly bits. All that stuff is tucked away, swept under the bed, hidden, with a nice GNOME/KDE/other user interface that attempts (and fails, miserably) to shield the novice end user from exposure to the senseless geekiness that is the UNIX underpinnings of a very old operating system. And even when the shielding appears to work, it doesn't really work. For instance, Fedora 7 has a GUI configurator for the Apache web server, but if you use it you will break Apache, because you'll end up with two different configuration files in the "etc" directory, and between the two of them you'll get two bindings to port 80. This doesn't just get fixed through the update pipeline (or hadn't been when this bug bit me) because Fedora's contributors are busy working on so many other bugs. This is the big problem with having so many distros: the work is forked across a couple hundred repeated attempts to provide a custom solution to the same problem, and no one is ultimately accountable for leadership except the volunteers behind any particular instance.
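
Here's roughly how that Apache bug shows itself from a shell, assuming the stock httpd layout on Fedora:

    grep -rn '^Listen' /etc/httpd/conf /etc/httpd/conf.d   # two 'Listen 80' lines
    httpd -t                                               # syntax check passes...
    # ...but startup still dies with something like
    # "(98)Address already in use: make_sock: could not bind to address 0.0.0.0:80"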

3. It is bound to its legacy. Despite the very careful and successful handiwork of incredibly smart programmers, this ancient operating system has evolved with total support for both legacy and modern architectures. That fact should bring a smile to any Linux geek's face, but it is not a good thing. Think Windows 95. Windows 98. Windows ME. Each of these was an evolution of a really, really poorly architected operating system, built as though Microsoft was using bubble gum and tape to expand its capabilities. (In actuality, bubble gum and tape were not used. Microsoft had a bit more money than that, enough to afford caulk and nails--of the brittle sort. The problem was a broken foundation, which was ultimately replaced with the reworked NT4/2000 codebase.) Linux fortunately has a very stable foundation on which to add all these new evolutionary features, but its support for legacy software and the great multitude of development libraries is also an undying foundation that can never be declared antiquated, because so much depends on it.

I'm glad that the Linux foundational underpinnings are constantly improving; I know that the Linux kernel team(s) work hard to stay up to date with things like new chipset hardware, multiprocessing (years ago), and now hypervisor support (recently). But it still walks and talks like a penguin--as in, like a geek who knows how to use Emacs and can explain the big difference between '/var' and '/usr', or why we use 'init 3' to break out of the GUI. Evolution has come in the form of additional software libraries that run on top of this awkward, wave-bouncing boat that miraculously stays afloat without sinking.
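
(Since I mentioned it: 'init 3' is a SysV runlevel switch; the numbering below is the Red Hat/Fedora convention.)

    init 3    # runlevel 3: multi-user, text mode -- tears down the GUI
    init 5    # runlevel 5: multi-user with X11 -- brings it back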

4. The revised GUI applications are improving dramatically, but the classic software itself doesn't make sense. Look at text-mode Emacs. If you know how to use Emacs, skip this, but if you don't, look at it. Can you figure out how to use it just by looking at it? Poking at it? No? OK, try "man emacs", or google for help. What's that? Going to take you days to get started? Okay, then, let me know when you can start being productive. I expect to hear from you next week. Hopefully that's not too soon.
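
To make the point concrete, here is the non-discoverable minimum a newcomer has to memorize before text-mode Emacs is usable at all:

    emacs -nw notes.txt    # open a file in the terminal
    #   C-x C-s   save
    #   C-x C-c   quit
    #   C-g       abort whatever chord you just mistyped
    #   C-h t     the built-in tutorial -- if you know to ask for it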

5. Without authoritative decision makers, you end up with chaos. And that's exactly what Linux is--organized chaos. Or is it chaotic organization? The stuff "just works", when it does. But wow, is it a mess of gobbledygook, with no one to account for the mess that it is. With the old Macintosh System 6 / 7 (again, I don't know what OS X looks like), applications were cleanly organized into a self-dependent, fully consolidated application file. Preferences were dropped into a Preferences directory. In Windows, applications are cleanly tucked into "Program Files\Company\App Name" or "Program Files\App Name". Shared libraries are pretty much always .dll's, and DLLs are always either 'C'-invokable libraries (which may or may not be COM-invokable) or CLR assemblies. App settings go into a pseudo file system created just for configurations--the Windows registry--and follow standard path conventions like HKCU\Software\[App]\@setting=value. And when software is installed, it must be registered as an installed app and be uninstallable from "Add/Remove Programs" (or "Programs and Features" in Vista). Linux does have RPM, but the dependency trees, as well as the potential corruption thereof, are too painful to deal with.
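
You can inspect the tangle by hand with standard rpm queries (gtk-sharp2 here is just the Fedora package name I happen to be fighting with):

    rpm -q --requires monodevelop       # what this package needs
    rpm -q --whatrequires gtk-sharp2    # what would break if I touched this
    rpm -qa | wc -l                     # how many packages are in the web at all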

A clean design starts with a designer, and consistency with a clean design depends on an authority figure who can sign off on it. With Linux, which has no one designer, you have all sorts of files, in all kinds of different "languages" (.o, .class, .pl, .bleah), no real file typing ("chmod +x bleah"), and you can put /any/thing/any/where/.and/yet/it/will/make/sense/to/some/geek. Everything that works, works for the people who established it and for the people who learned it well enough to build dependencies upon it. But the fact that any one thing might work fine doesn't change the fact that the sum of all the parts is more akin to a zoo than an organization, which makes it extremely difficult for the average person to adopt.

6. The GUI subsystem (X Window) is hardly performant, and it's erratic. Macintosh rebuilt its GUI subsystem on PDF technology for crisp anti-aliasing, yet its down-to-the-metal optimizations make it a clean, fast, and beautiful environment. Microsoft Windows Vista's Aero subsystem takes it to the next level and channels everything through Direct3D, taking advantage of video card optimizations to make the user experience very smooth and responsive. But Linux? From what I can tell, X still pushes everything through a network-style client/server protocol (TCP/IP sockets for remote displays, local sockets otherwise), which is one of the slower (if most versatile) communications channels on an operating system. The advantage is that you can redirect windowing instructions to another machine (like you can with Windows' Remote Desktop), even through SSH; the downside is that you get limited performance, and you have to minimize the instructions and make the instruction set "smarter" to do things like OpenGL or other graphics-intensive work. In the end, whenever I use Linux locally, I feel like I'm using Remote Desktop. The mouse is slow and erratic, and everything feels a few milliseconds behind me. Everything seems bound by elastic bands. Whereas when I'm in Windows doing the same things, everything is very responsive; the mouse, in particular, feels flawless and perfectly optimized to track my hardware and physical movements. (The Windows kernel seems to manage the mouse in an isolated, high-priority system thread, separate from everything else, which is why, no matter how slow other things are, it is always very responsive and true to physical movement.) And I don't suppose I can ever dream of getting VMWare to resize the client OS screen resolution at runtime without shutting down X Window and restarting it.
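
That network transparency, to be fair, is genuinely handy; it's a one-liner over SSH:

    ssh -X user@remotehost    # (or -Y for "trusted" forwarding)
    echo $DISPLAY             # something like localhost:10.0 on the far end
    gedit &                   # runs over there, draws on my screen here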

But there are some things I like about Linux.

I like that there is a "clean" command-line environment where administration can be performed without a GUI. I still miss Windows 95/98 being able to "boot into DOS mode", switching straight to the MS-DOS 7 that Windows 95 was still built on top of. Windows NT/2000/XP/2003/Vista forces you to have a useless mouse in your face even if you're booting into "Safe Mode with Command Prompt".

I like the support for significant development languages and tools, like Java and Python. I can learn such things in Windows and deploy to Linux. (Why I would want to, if it all runs in Windows, I don't know--but some stuff I have to support is built on that crazy zoo of a foundation that Linux is, such as Perl and Apache and lots of Linux-specific add-on modules that get dropped into those weird directories.)

I like the geek love that Linux enjoys. Windows doesn't get that love; it only gets hate, from the Linux lovers. Then again, the local user's group for Microsoft technologies definitely enjoys some Windows and .NET love, so never mind.

I like the network orientation of Linux. It's not the answer, but it does set a precedent.

I like the artistic contributions for the more recent UI elements, like GNOME. I think the fonts are lame, but more for inconsistent display behavior between windows and programs than for the font designs.

But I can totally live without those things. I'd much rather live without those things than have to put up with the mess that Linux is. Let's face it, Linux isn't the open-source answer for a sensible operating system.

The ReactOS project isn't the answer, either. If we wanted Windows, we'd get Windows. We don't need an open-source look-alike of Windows. What this is about is the need for an innovative and original open-source operating system that is not bound by legacy constraints.

BeOS came ever so close--if only it were still available, and open source! But Haiku OS (not Linux-based, and inspired by BeOS) seems REALLY interesting. Could it be the answer??

With virtualization now commonplace--VMWare, Virtual PC, Parallels, Xen--I can't help but wonder why the geek community hasn't already gone back to the drawing board to rethink operating systems.

Microsoft gave me a glimmer of hope with their Singularity project. Singularity is a fresh, from-scratch operating system that Microsoft built using a little bit of assembly and C/C++, but mostly a variant of C#, all the way down to the metal. It completely drops all ties to legacy support and instead focuses on the future. It is so thoroughly oriented around isolated, message-passing processes that hypervisors become almost a moot concept. It is Microsoft's opportunity to take two decades of lessons learned with Windows and prototype an OS with no legacy constraints and those lessons applied. And although it's not tuned for raw performance (which is why assembly and C++ lovers will inevitably hate the Singularity idea), it's still performant, and it runs circles around modern OS's when it comes to inter-process communication.

Sadly, the Singularity project isn't open source. That is, Microsoft Research did "open" it up to a few professors at a few select universities, but this isn't an open-source operating system intended to be consumed by the general global geek community.

This is why I'm getting a bit excited about the idea that maybe it's time for the geeks to get a clue. VMWare now has debugger support down to the hardware level, and even rewind-and-replay support. The tools are ripe for dreaming up, building, and playing with a whole new operating system--one that is sensibly designed, responsibly organized, and yet open for community contributions. This would be an opportunity for people to learn how to organize and delegate community representatives, and to provide the world a fresh, clean, newly designed operating system that is free to use, freely consumable, free to break apart, free to extend, but having a single, organized, moderated, carefully planned public "distro", all the way down to the metal, with no legacy constraints and sights set on the future. If the open-source community really wants to compete with Microsoft, they should pay attention to what Microsoft is doing besides duplicating Windows Explorer with Nautilus, and look at how Microsoft constantly rethinks and refactors its core design, from the kernel (dropping Win9x for WinNT) to the directory structures to the windowing subsystem being channeled straight through Direct3D (with Aero).

Believe it: you can have your GNU and your OSI (Open Source Initiative) and lose your *nix. It's okay if you do. I'll be cheering you on, even if I'm the only one in the stands doing the wave. But not until we get away from usr/var/etc and stop taking geeky, old-school, antiquated foundations for granted. They're expensive, and they're not worth keeping around.

Mono Needs Help

by Jon Davis 21. July 2007 19:30

The Mono Project over at http://www.mono-project.org/ really needs help, mainly with quality control and distribution maintenance.

For a long time I have been very glad for Mono, always smiling when I see it mentioned in the press, such as in SD Times articles, or in announcements and Internet buzz like the recent news about Moonlight, the Silverlight-to-Mono port project. It has always annoyed me when people dis the Mono project as nothing but intellectual tinkering, because I thrive in .NET, and as much as I admire Linux, I need a transitional environment before I can embrace it. I have full confidence in the Mono team's ability to write excellent .NET-compatible code that runs on Linux.

But there are some painful quality-control issues going on with Mono, and personally I think it just needs one or two people to jump in and help out, if the team is willing to embrace another helping hand. Otherwise, they need to get their act together.

The biggest issues, I believe, are with GTK# and with the MonoDevelop IDE. While the Mono team would likely argue that an IDE is not part of a programming SDK like .NET, one must appreciate the fact that in the Windows world, .NET and Visual Studio are like peanut butter and jelly--each can be eaten on bread alone, but the experience is incomplete unless they are together.

I have no problem with Mono or MonoDevelop being incomplete. My issue is with the broken installation process. The whole idea of .NET on the Microsoft front was to eliminate "DLL hell", both by allowing folks to install multiple versions of a DLL side by side and GAC them, and by enabling--even encouraging--software developers to distribute non-GAC'd builds of libraries whose revised implementations can still serve applications that were compiled against an older version.
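
Mono does ship the tooling for this; a quick sketch with the stock gacutil (MyLib.dll being a hypothetical strongly-named assembly):

    gacutil -i v1/MyLib.dll    # install version 1.0.0.0 into the GAC
    gacutil -i v2/MyLib.dll    # install version 2.0.0.0 alongside it, not over it
    gacutil -l MyLib           # list both versions, coexisting side by side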

The MonoDevelop package, or at least the one distributed by the Fedora project's Extras repository (yum install monodevelop), installs all of the dependencies, but with incompatible versions of GTK# (et al.). So when you fire up MonoDevelop, in addition to getting an error about MonoQuery.addin (not sure what that one's about), if you start a new GTK# project--despite GtkSharp clearly showing up in the References--you get a compile error saying that the Gtk namespace cannot be found.
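
What I ended up checking by hand went more or less like this (the pkg-config name follows the gtk-sharp 2.x convention; hello.cs is any trivial file with "using Gtk;" at the top):

    pkg-config --modversion gtk-sharp-2.0   # what version the toolchain resolves
    gacutil -l | grep -i gtk                # what's actually registered in the GAC
    mcs -pkg:gtk-sharp-2.0 hello.cs         # can the compiler even find Gtk?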

I have installed Fedora at least five times in the last week or two, in VMWare, trying different yum / rpm installation sequences and trying to figure out where I went wrong. I have reached the conclusion that I wasn't going wrong--the MonoDevelop and/or Mono teams are the ones who got it wrong.

One might argue that it's Fedora's problem, since they built the distro packages. Wrong again; the Mono project's own web site's download links are distribution-targeted, such as for SuSE and Fedora, but the Fedora links are for Fedora 5 (that's TWO major releases old) and are strongly versioned for Fedora 5 when installed. And when I use the noarch installer, at the end of installation I get an error message to the effect of "it appears that some graphics applications might not run correctly, please install those libraries individually"--and MonoDevelop still doesn't "automagically" fix itself.

The GAC is overrated. The CLR's inherent backward-compatibility support for future-versioned libraries is as key to Mono's success as side-by-side version installation. And this is a fundamental problem with using RPM technology for Mono: you cannot install an older version of GTK# when a newer version is already installed. You can do a force install, but at what cost? What breaks?
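
The blunt instruments rpm does offer only make me more nervous (the file name below is a placeholder):

    rpm -Uvh --oldpackage gtk-sharp2-<old-version>.rpm   # explicit downgrade
    rpm -ivh --force gtk-sharp2-<old-version>.rpm        # shove it in regardless
    # Neither asks what the rest of the dependency tree thinks of the result.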

I'm still trying to get this stupid thing going. But rest assured, I would rather drop Mono than drop Fedora 7 for Fedora 5, or for SuSE (which I also have installed in a VM, and which I'm unimpressed with).


Tags:

Software Development | Linux


