I Don’t Much Get Go

by Jon Davis 24. August 2010 01:53

When Google announced their new Go programming language, I was quite excited and happy. Yay, another language to fix all the world’s problems! No more suckage! Suckage sucks! Give me a good language that doesn’t suffer suckage, so that my daily routine can suck less!

And Google certainly presented Go as “a C++ fixer-upper”. I just watched this video of Rob Pike describing the objectives of Go, and I can definitely say that Google’s mantra still lines up. The video demonstrates a lot of the evils of C++. Despite some attempts at self-edumacation, I personally cannot read or write real-world C++ because I get lost in the gobbledygook he demonstrated.

But here’s my comment on YouTube:

100% of the examples of "why C++ and Java suck" are C++. Java's not anywhere near that bad. Furthermore, the ECMA open standard language known as C#--brought about by a big, pushy company no different in this space than Google--already had the exact same objectives that Java and now Go have, and it has actually been fundamentally evolving at the core, such as by replacing patterns with language features (as seen in lambdas and extension methods). Google just wanted to be INDEPENDENT, typical anti-MS.

While trying to take Go in with great excitement, I’ve been forced to conclude that Google’s own message delivery sucks, mainly because it completely ignores some of the successful languages in the industry—namely C# (as well as some of the lesser-used excellent languages out there). It’s much like how Microsoft’s delivery of C#’s core objectives somewhat sucked by refraining from mentioning Java even once when C# was announced (I spent an hour looking for the C# announcement white paper from 2000/2001 to back up this memory, but I can’t find it). The video linked above doesn’t show a single example of Java suckage; it just makes these painful accusations that Java is right in there with C++ as a crappy language to work with. I haven’t coded in Java in about a decade, honestly, but back then Java was the shiznat, and the code was beautiful, elegant, and more or less easy to work with.

Meanwhile, Java has been evolving in some strange ways, and ultimately I find it far less appetizing than C#. But where is Google’s nod to C#? Oh, that’s right, C# doesn’t exist; it’s a figment of someone’s imagination, because Google considers Microsoft (C#’s maintainer) a competitor, duh. This is an attitude that should make anyone automatically skeptical of the language creator’s true intentions, and therefore of the language itself. C# actually came about in much the same way as Go did, as far as trying to “fix” C++ goes. In fact, most of the problems Go describes in C++ were the focus of C#’s objectives, along with a few thousand other objectives. Amazingly, C# has met most of its objectives so far.

If we break down Google’s objectives themselves, we don’t see a lot of meat. What we find, rather, are Google employees trying to optimize their coding workflow for what were previously C++ development efforts, using perhaps emacs or vi (Rob even listed IDEs as a failure of modern languages). Their requirements for Go actually appear to be rather trivial. It seems that they want to write quick-and-easy C-syntax-like code that doesn’t get in the way of their business objectives, that performs very fast, and that compiles fast enough to let them pop out of vi, invoke gcc or whatever compiler, and get right back to coding. These are certainly great nice-to-haves, but I’m pretty sure that’s about it.

Consider, in contrast, .NET’s objectives of a decade ago, .NET being at the core of applied C#, as C# runs on the CLR (the .NET runtime):

  • To provide a very high degree of language interoperability
    • Visual Basic and C++ and Java, oh my! How do we get them to talk to each other with high performance?
    • COM was difficult to swallow. It didn’t suck, because its intentions were gorgeous—to have a language-neutral marshalling paradigm between runtimes—but then the same objectives were found in CORBA, and that sucked.
    • Go doesn’t even have language interoperability. It has C (and only C) function invocation. Bleh! Google is not in the real world!
  • To provide a runtime environment that completely manages code execution
    • This in itself was not a feature; it was a liability. But it enabled a great deal, namely consolidating QA resources for low-level functionality, which in turn brought about instantaneous quality and productivity on Microsoft’s part across the many languages and tools, because fewer resources had to focus on duplicate details.
    • The Mono runtime can run a lot of languages now. It is slower than C++, but not by a significant margin. A C# application, fully ngen’d (precompiled to machine-level code), will execute at roughly 90-95% of C++’s (and thus, theoretically, Go’s) performance, which frankly is pretty darn good.
  • To provide a very simple software deployment and versioning model
    • A real-world requirement which Google, in its corporate and web sandboxes, is oblivious to. I’m not sure that Go even has a versioning model.
  • To provide high-level code security through code access security and strong type checking
    • Again, a real-world requirement which Google, in its corporate and web sandboxes, is oblivious to, since most of their code is only exposed to the public via HTML/REST/JSON/SOAP.
  • To provide a consistent object-oriented programming model
    • It appears that Go is not an OOP language. There is no class support in Go. No objects at all, really. Just primitives, arrays, and structs. Surpriiiiise!! :D
  • To facilitate application communication by using industry standards such as SOAP and XML.
  • To simplify Web application development
    • I really don’t see Google innovating here; instead they push Python and Java on their app cloud. I most definitely don’t see this applying to Go at all.
  • To support hardware independence and portability
    • Although the implementation of this (JIT) is a liability, the objective is sound. Old-skool Linux folks didn’t get this; it’s stupid to have to recompile an application for distribution. Software should be precompiled.
    • Java and .NET are on near-equal ground here. When Java originally came about, it was the silver bullet for “Write Once, Run Anywhere”. With the successful creation and widespread adoption of the Mono runtime, .NET has the same portability. Go, however, requires recompilation for each platform. Once again, Google is not out in the real world; they live in a box (their headquarters and their exposed web).

And here are the goals of C#:

  • C# is intended to be a simple, modern, general-purpose, object-oriented programming language.
    • Go: “OOP is cruft.”
  • The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
    • Go: “Um, check, maybe. Especially productivity. Productivity means clean code.”
    • (As I always say, the more you know, the more you realize how little you know. Clearly you think you’ve got it all down, little Go.)
  • The language is intended for use in developing software components suitable for deployment in distributed environments.
    • Go: “Yeah we definitely want that. We’re Google.”
  • Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
    • Go: “Just forget C++. It’s bad. But the core syntax (curly braces) is much the same, so ... check!”
  • Support for internationalization is very important.
  • C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
    • Go: “Check!”
    • (Yeah, except that Go isn’t an applications platform. At all. So, no. Uncheck that.)
  • Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.

Right now, Go just looks like a syntax with a few basic support classes for I/O and such. I must confess I was somewhat unimpressed by what I saw at Go’s web site (http://golang.org/), because the language does not look like much of a readability / maintainability improvement over what Java and C# offered up.

  • Go supposedly offers memory management, but it still heavily uses pointers. (C# supports pointers, too, by the way, but since pointers are not safe you must declare your code as “containing unsafe code”. Most C# code strictly uses type-checked references.)
  • Go eliminates semicolons as statement terminators. “…they are inserted automatically at the end of every line that looks like the end of a statement…” Sorry, but semicolons did not make C++ unreadable or unmaintainable
    Personally I think code without punctuation (semicolons) looks like English grammar without punctuations (no period)
    You end up with what look like run-on sentences
    Of course they’re not run-on sentences, they’re just lazily written ones with poor grammar
    wat next, lolcode?
  • “{Sample tutorial code} There is no implicit this and the receiver variable must be used to access members of the structure.” Wait, what, what? Hey, I have an idea, let’s make all functions everywhere static! (See the first sketch after this list.)
  • Actually, as far as I can tell, Go doesn’t have class support at all. It just has primitives, arrays, and structs.
  • Go uses the := syntax to declare and assign in one step, rather than plain = for everything. I suppose this would help eliminate the issue where people type = where they meant to type == and destroy their variables.
  • Go has a nice “defer” statement that is akin to C#’s using() {} and try...finally blocks. It allows you to be lazy and disorganized, such that late-executed code that should be called after immediate code doesn’t have to be written below that immediate code; we can just sprinkle late-executed code in as we go. We really needed that. (Except, not.) I think defer’s practical applicability is for some really lightweight AOP (Aspect Oriented Programming) scenarios, except that defer is a horrible approach to it. (There’s a small defer sketch after this list.)
  • Go has both new() and make(). I feel like I’m learning C++ again. It’s about those pesky pointers ...
    • Seriously, how the heck is

        var p *[]int = new([]int)       // allocates slice structure; *p == nil; rarely useful
        var v  []int = make([]int, 100) // the slice v now refers to a new array of 100 ints

      .. a better solution to “improving upon” C++ with a new language than, oh I don’t know ..

        int[] p = null;        // declares an array variable; p is null; rarely useful
        var v = new int[100];  // the variable v now refers to a new array of 100 ints

      ..? I’m sure I’m missing something here, particularly since I don’t understand what a “slice” is, but I suspect I shouldn’t care. Oh, never mind, I see now that it “is a three-item descriptor containing a pointer to the data (inside an array), the length, and the capacity; until those items are initialized, the slice is nil.” Great. More pointer gobbledygook. C# offers a richly defined System.Array, and all this stuff is transparent to the coder, who really doesn’t need to know that there are pointers, somewhere, associated with the reference to his array. Isn’t that the way it all should be? Is it really necessary to have two completely different semantics (new() vs. make())? Ohh yeah. The frickin’ pointer vs. the reference.
  • I see Go has fmt.Printf(), plus fmt.Fprintf(), plus fmt.Sprintf(), plus Print() plus Println(). I’m beginning to wonder if function overloading is missing in Go. I think it is: http://golang.org/search?q=overloading (the fmt sketch after this list shows how the variants divide up the work by name instead).
  • Go has “goroutines”. It’s basically “go func() { /* do stuff */ }”, and it will execute the code as a function on the fly, in parallel. In C# we call these anonymous delegates, and delegates can be passed along to worker thread pool threads on the fly with only one line of code, so yes, it’s supported. F# (a young .NET sibling of C#) has this too, by the way, and its support for inline anonymous delegate declarations and spawning them off in parallel is as good as Go’s. (See the goroutine/channel sketch after this list.)
  • Go has channels for communication purposes. C# has WCF for this, which is frankly a mess. The closest you can get to Go on the CLR, as far as channels go, is Axum, which is a variation of C# with rich channel support.
  • Go does not throw exceptions. It panics, from which it might recover. (A panic/recover sketch closes out the examples after this list.)
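
Since a few of the bullets above beg for actual code, here are some tiny sketches, pieced together from my reading of the Go docs—so treat them as best-guess illustrations, not gospel. First, the receiver-variable business; the Point type and its Sum method are my own made-up example, and note the := declare-and-assign syntax in main:

    package main

    import "fmt"

    type Point struct {
        X, Y int
    }

    // Sum is a method on Point. The receiver is the explicitly named
    // variable p; there is no implicit "this".
    func (p Point) Sum() int {
        return p.X + p.Y
    }

    func main() {
        p := Point{3, 4}     // := declares and assigns in one step
        fmt.Println(p.Sum()) // prints 7
    }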
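
Next, roughly what defer looks like in practice. The file name here is made up; the point is only where the cleanup call sits relative to the code it cleans up after:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        f, err := os.Open("data.txt") // hypothetical file
        if err != nil {
            fmt.Println(err)
            return
        }
        // Scheduled here, next to the Open call, but executed when main
        // returns -- loosely what C#'s using () {} block buys you.
        defer f.Close()

        buf := make([]byte, 64)
        n, _ := f.Read(buf)
        fmt.Printf("read %d bytes\n", n)
    }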
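
The fmt variants, for what it’s worth, appear to differ by destination and return value rather than by overload resolution; each combination gets its own name:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        n := 42
        fmt.Printf("%d items\n", n)       // writes to standard output
        s := fmt.Sprintf("%d items", n)   // returns the formatted string
        fmt.Fprintf(os.Stderr, "%s\n", s) // writes to any io.Writer
    }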
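
The goroutine-plus-channel combo looks more or less like this (again, my own contrived example):

    package main

    import "fmt"

    func main() {
        ch := make(chan string)

        // "go func() { ... }" spawns the anonymous function on its own
        // goroutine -- comparable to handing an anonymous delegate to a
        // worker thread in C#.
        go func() {
            ch <- "done" // send a result over the channel
        }()

        fmt.Println(<-ch) // block until the goroutine reports back
    }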
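
And finally the panic/recover dance, standing in where C# would have try/catch (also contrived):

    package main

    import "fmt"

    func mightPanic() {
        // recover() only has effect inside a deferred function; it turns
        // an in-flight panic back into an ordinary value.
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        panic("something broke") // roughly where C# would throw
    }

    func main() {
        mightPanic()
        fmt.Println("still running")
    }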

While I greatly respect the contributions Google has made to computing science, and their experience in building web-scalable applications (which, frankly, typically suck at a design level when they aren’t tied to the genius search algorithms), and I have no doubt that Google is an experienced web application software developer with a lot of history, I honestly think they are clueless when it comes to real-world applications programming solutions. Microsoft has been demonized the world over since its beginnings, but one thing they and few others have is some serious, serious real-world experience with applications. Between all of the web sites and databases and desktop applications combined, everywhere on planet Earth throughout the history of man, Microsoft has probably been responsible for the core applications plumbing for the majority of it all, followed perhaps by Oracle. (Perhaps *nix, and the applications and services that run on it, has been the majority; if nothing else, Microsoft has most certainly had the lead in software as a single company, which is the point I’m making.)

It wasn’t my intention to make this a Google vs. Microsoft debate, but frankly, the fact that Go presentations neglect C# severely calls Go’s trustworthiness into question.

In my opinion, a better approach to what Google was trying to do with Go would have been to take a popular language, such as C#, F#, or Axum, and break it away from the language’s implementation libraries (i.e. the .NET platform’s BCL), replacing them with the simpler constructs, support code, and lightweight command-line tooling found in Go, and then wrap the compiler to force it to natively compile to machine code (ngen). Honestly, I think that would be, a) a much better language and runtime than Go, because it would offer most of the benefits of Go in a manner that retains most or all of the advantages of the selected runtime (i.e. the CLR’s and C#’s multitude of advantages over C/C++), but also b) a flop, and a waste of time, because C# is not really broken. Coupled with F#, et al., our needs are quite well met. So thanks anyway, Google, but, really, you should go now.

C# | F# | General Technology | Mono | Opinion | Software Development

Nail-biting success with Windows 7 Media Center and CableCard

by Jon Davis 22. June 2009 01:34

I got CableCard working today with Windows 7 Media Center. This is my first CableCard install. The install was not smooth, but it was successful. I'd heard a lot of horror stories about CableCard, but most of these stories were from two years or so ago. I expected the whole matter to be cleaned up by now. It has probably improved a lot, but I was surprised by how bumpy the ride was.

...

COX technicians really, genuinely, deeply hate CableCards. This was the first visit to install a CableCard, but the second visit from COX in the last couple of weeks where the subject of CableCards came up. These guys acknowledge that the idea behind CableCard is a sound one, but the problem is that the cards are so different, and the devices are so different, that it's really hard to get a successful install. [I think Microsoft can relate to this sort of experience, being that their software works with most any white-box PC on the planet.] Today's installer had a nauseated-looking frown on his face consistently from the moment he climbed the outer stairs to my door until the minute he walked out the door, although he had a slight skip in his step when he left. I'm not sure if that's because he was glad the torture was over or because it wasn't another failure.

More at: http://thegreenbutton.com/forums/thread/370299.aspx

Electronics | General Technology | Pet Projects

Nine Reasons Why 8GB Is Only Just Enough (For A Professional Business Software Developer)

by Jon Davis 13. February 2009 21:29

Today I installed 8GB on my home workstation/playstation. I had 8GB lying around already from a voluntary purchase for a prior workplace (I took my RAM back and put the work-provided RAM back in before I left that job), but that brand of RAM didn’t work correctly on my home PC’s motherboard. It’s all good now, though: with some high-quality performance RAM from OCZ, my Windows 7 system’s self-rating on RAM I/O jumped from 5.9 to 7.2.

At my new job I had to request a RAM upgrade from 2GB to 4GB. (Since it’s 32-bit XP, I couldn’t go any higher.) When I asked when 64-bit Windows Vista or Windows 7 would be put on the table for consideration as an option for employees, I was told “there are no plans for 64-bit”.

The same thing happened with my last short-term gig. Good God, corporate IT folks everywhere are stuck in the year 2002. I can barely function at 4GB, can’t function much at all at 2GB.

If you are a multi-role developer and aren’t already saturating at least 4GB of RAM, you are throwing away your employer’s money, and if you are IT and not providing at least 4GB of RAM to developers and actively working on adding corporate support for 64-bit on employees’ workstations, you are costing the company a ton of money due to productivity loss!! I don’t know how many times I’ve seen people restart their computers or sit and wait for 2 minutes for Visual Studio to come up because their machine is bogged down in a swap file. That was “typical” half a decade ago, but it’s not acceptable anymore. The same is true of hard drive space. Fast 1-terabyte hard drives are available for less than $100 these days; there is simply no excuse. For any employee who makes more than X (say, $25,000), for Pete’s sake, throw in an extra $1,000-$2,000 or so and get the employee two large (24-inch) monitors, at least 1TB of hard drive space (ideally 4 drives in a RAID 0+1 array), 64-bit Windows Server 2008 / Windows Vista / Windows 7, a quad-core CPU, and 8GB of high-performance (800+ MHz) RAM. It’s not that that’s another $2,000 or so to lose; it’s that just $2,000 will save you many thousands more. By quadrupling the performance of your employee’s system, you’d effectively double the productivity of your employee; it’s like getting a new employee for free. And if you are the employee, making double that X (say, more than $50,000), and if your employer could somehow allow it (and they should; shame on them if they don’t and they won’t do it themselves), you should go out and get your own hardware upgrades. Make yourself twice as productive, and earn your pay with pride.

In a business environment, whether one is paid by the hour or salaried (already expected to work X hours a week, which is effectively loosely translated to hourly anyway), time = money. Period. This is not about developers enjoying a luxury, it’s about them saving time and employers saving money.

Note to the morons who argue “this is why developers are writing big, bloated software that suck up resources” .. Dear moron, this post is from the perspective of an actual developer’s workstation, not a mere bit-twiddling programmer—a developer, that is, who wears many hats and must not just write code but manage database details, work with project plans, document technical details, electronically collaborate with teammates, test and debug, etc., all in one sitting. Nothing in here actually recommends or even contributes to writing big, bloated software for an end user. The objective is productivity, your skills as a programmer are a separate concern. If you are producing bad, bloated code, the quality of the machine on which you wrote the code has little to nothing to contribute to that—on the contrary, a poor developer system can lead to extremely shoddy code because the time and patience required just to manage to refactor and re-test become such a huge burden. If you really want to test your code on a limited machine, you can rig VMWare / VirtualPC / VirtualBox to temporarily run with lesser RAM, etc. You shouldn’t have to punish yourself with poor productivity while you are creating the output. Such punishment is far more monetarily expensive than the cost of RAM.

I can think of a lot of reasons for 8+ GB RAM, but I’ll name a handful that matter most to me.

  1. Windows XP / Server 2003 alone takes up half a gigabyte of RAM (Vista / Server 2008 takes up double that). Scheduled tasks and other processes cause the OS to peak out at some 50+% more. Cost: 512-850MB. Subtotal @nominal: ~512MB; @peak: 850MB
  2. IIS isn’t a huge hog, but it’s a big system service with a lot of responsibility. Cost: 50-150MB. Subtotal @nominal: ~550MB; @peak: 1GB.
  3. Microsoft Office and other productivity applications often need to be used more than one at a time. For more than two decades, modern computers have supported a marvelous feature called multi-tasking. This means that if you have Outlook open, and you double-click a Microsoft Word attachment, and upon reading it you realize that you need to update your Excel spreadsheet, which in your train of thought finds you updating an Access database, and then you realize that these updates result in a change of product features so you need to reflect these details in your PowerPoint presentation, you should have been able to open each of these applications without missing a beat, and by the time you’re done you should be able to close all these apps in no more than one passing second per click of the [X] close button of each app. Each of these apps takes up as much as 100MB of RAM, Outlook typically even more, and Outlook is typically always open. Cost: 150MB-1GB. Subtotal @nominal: 700MB; @peak: 2GB.
  4. Every business software developer should have his own copy of SQL Server Developer Edition. Every instance of SQL Server Developer Edition takes up a good 25MB to 150MB of RAM just for the core services, multiplied by each of the support services. Meanwhile, Visual Studio 2008 Pro and Team Edition come with SQL Server 2005 Express Edition, not 2008, so for some of us that means two installations of SQL Server Express. Both SQL Server Developer Edition and SQL Server Express Edition are ideal to have on the same machine since Express doesn’t have all the features of Developer and Developer doesn’t have the flat-file support that is available in Express. SQL Server sitting idly costs a LOT of CPU, so quad core is quite ideal. Cost: @nominal: 150MB, @peak 512MB. Subtotal @nominal: 850MB; @peak: 2.5GB. We haven’t even hit Visual Studio yet.
  5. Except in actual Database projects (not to be confused with code projects that happen to have database support), any serious developer would use SQL Server Management Studio, not Visual Studio, to access database data and to work with T-SQL tasks. This would be run alongside Visual Studio, but nonetheless as a separate application. Cost: 250MB. Subtotal @nominal: 1.1GB; @peak: 2.75GB.
  6. Visual Studio itself takes the cake. With ReSharper and other popular add-ins like PowerCommands installed, Visual Studio just started up takes up half a gig of RAM per instance. Add another 250MB for a typical medium-size solution. And if you, like me lately, work in multiple branches and find yourself having to edit several branches for different reasons, one shouldn’t have to close out of Visual Studio to open the next branch. That’s productivity thrown away. This week I was working with three branches; that’s 3 instances. Sample scenario: I’m coding away on my sandbox branch, then a bug ticket comes in and I have to edit the QA/production branch in an isolated instance of Visual Studio for a quick fix, then I get an IM from someone requesting an immediate resolution to something in the developer branch. Lucky I didn’t open a fourth instance. Eventually I can close the latter two instances down and continue with my sandbox environment. Case in point: Visual Studio costs a LOT of RAM. Cost @nominal 512MB, @peak 2.25GB. Subtotal @nominal: 1.6GB; @peak: 5GB.
  7. Your app being developed takes up RAM. This could be any amount, but don’t forget that Visual Studio instantiates independent web servers and loads up bloated binaries for debugging. If there are lots of services and support apps involved, they all stack up fast. Cost @nominal: 50MB, @peak 750MB. Subtotal @nominal: 1.65GB; @peak: 5.75GB.
  8. Internet Explorer and/or your other web browsers take up plenty of RAM. Typically 75MB for IE to be loaded, plus 10-15MB per page/tab. And if you’re anything like me, you’ll have lots and lots and LOTS of pages/tabs by the end of the day; by noon I typically end up with about four or five separate IE windows/processes, each with 5-15 tabs. (Mind you, all or at least most of them are work-related windows, such as looking up internal/corporate documents on the intranet or tracking down developer documentation such as API specs, blogs, and forum posts.) Cost @nominal: 100MB; @peak: 512MB. Subtotal @nominal: 1.75GB; @peak: 6.5GB.
  9. No software solution should go untested on as many platforms as are going to be used in production. If it’s a web site, it should be tested on IE 6, IE 7, and IE 8, as well as the current version of Opera, Safari 3+, Firefox 1.5, Firefox 2, and Firefox 3+. If it’s a desktop app, it should be tested on every compatible version of the OS. If it’s a cross-platform compiled app, it should be tested on Windows, Mac, and Linux. You could have an isolated set of computers and/or QA staff to look into all these scenarios, but when it comes to company time and productivity, the developer should test first, and he should test right on his own computer. He should not have to shut down to dual-boot. He should be using VMWare (or Virtual PC, or VirtualBox, etc.). Each VMWare instance takes up the RAM and CPU of a normal system installation; I can’t comprehend why some people think that a VMWare image should only take up a few GB of hard drive space and half a gig of RAM; it just doesn’t work that way. Also, in a distributed software solution with multiple servers involved, firing up multiple instances of VMWare for testing and debugging should be mandatory. Cost @nominal: 512MB; @peak: 4GB. Subtotal @nominal: 2.25GB; @peak: 10.5GB.

Total peak memory (on 64-bit Vista SP1, which was not accounted for in #1): 11+GB!!!

Now, you could argue all day long that you can “save money” by shutting down all those “peak” processes to use less RAM rather than using so much. I’d argue all day long that you are freaking insane. The 8GB I bought for my PC cost me $130 from Dell. Buy, insert, test, save money. Don’t be stupid and wasteful. Make yourself productive.

Windows 7 Beta First Impressions

by Jon Davis 14. January 2009 04:47

Everyone has already made their Windows 7 first-impression comments, but I had to see Windows 7 for myself, as I always do with Windows pre-releases. So here are my first experiences. I tried the earlier PDC release, downloaded from a torrent, but I got an error after booting from the DVD saying that it could not locate an installer file.

Windows could not collect information for [OSImage] since the specified image file [install.wim] does not exist.

I chalked it up to a bad torrent download and tossed the copy.

Then Microsoft released Beta 1 this month. I tried downloading a torrent again, and the download was interrupted. I tried to resume the download, but no seeds were found after hours. I found another torrent, and after about half a day, with it half-downloaded, I realized Microsoft had actually released this version to the open public for anyone to download, so I deleted that torrent and started the download again, this time straight from Microsoft.

The next day, the download having been completed while I was sleeping, I burned it to DVD-RW and gave it a run. Guess what?

Windows could not collect information for [OSImage] since the specified image file [install.wim] does not exist.

Oh, poop. So the original download wasn't any more flawed in this regard than this one; it's something else.

I tried booting the DVD in VMWare on another PC, and it worked! Aha! It's a hardware problem, perhaps a DVD driver problem. My computer is only about one and a half years old, but the DVD drive is about four years old. I Googled around a bit for more information on this ridiculous error, and the only advice I could find were two suggestions:

  1. Someone commented, "You probably found an old DVD-RW from behind a sofa. Use a new DVD-R and that'll fix it right up." Hm. Doubtful. I burned another DVD-RW (same brand, roughly the same condition), and this time I checked off the "Verify" option in my burner software, and it checked out. Still got the error. It was at this point that I tried it in VMWare, and it got past this error, so no, it's not a bad disc. I suppose it could have to do with the failure of the other drive, on the other PC, to read the disc, though. In other words, that drive might have failed, not the disc.
  2. Someone said, "I was using an old USB-attached DVD drive that the BIOS enabled me to boot the disc from, but after installing an IDE-based DVD drive in the actual computer the error went away." Well, that stinks, because I'm using an IDE-based DVD drive; it's never given me any problems except that it often refuses to burn discs.

So I pondered. I'm leaning towards the #2 scenario as a clue: I know Microsoft was trying to thin down the core surface area in Windows 7, and I bet this is a lack of some drivers for my drive. But I wonder if "new" is the keyword here, not the form factor (IDE vs. USB).

I just happened to have an external USB-based DVD drive I recently purchased at Amazon. USB, but new. Could it work? I ran to the back room and grabbed it, brought it back in, stretched it across the room to the outlet, configured the BIOS to boot from USB, and booted the Windows 7 DVD. I went to install and....... yes!! It got past the error.

So here's the first first impression: While I greatly appreciate Microsoft's attempt to slim down the core dependency set of Windows and its driver set, in this area (CD/DVD drive support) they chopped off WAY too much. Perhaps driver support isn't the issue here, but if it is, this IS a bug. There are a LOT of people who were power users 4 years ago, who invested in the latest and greatest back then, who have had no Windows version but XP, and who were reluctant to switch to Vista because of the rough corners that Windows 7 has now rounded out. These years-old systems are more than adequate, surely, for Windows 7 performance-wise, but CD/DVD drivers are right up there with the USB subsystem and SATA as being most needed for success. Fix this, guys; this is a BUG, not a mere risky compromise (intentional droppage of legacy hardware support). Microsoft can't afford to lose THIS hardware.

I experienced no other hardware glitches, fortunately; even my audio hardware was working, and the Aero experience worked right from the post-setup first boot. There was only one other hardware-related annoyance: my two monitors were backwards, so I had to mouse far to the right to access the left monitor. Yes, this is configurable in the Control Panel, but I got annoyed watching setup and dealing with dialog boxes, etc., while everything was backwards and setup didn't have the Control Panel available to me. It would've been nice, I suppose, if there were one optional button during setup that brought up the Monitors dialog. At least the Monitors dialog is no longer accessed through the wholly inappropriately named (in Vista's time) "Personalization" dialog, which was SO ridiculously placed, since monitor setup (resolution, monitor placement, monitor drivers, color depth, etc.) has little to nothing to do with personalization. Might as well rename Control Panel to "Personalizations".. but they got it right this time, I'm glad.

The new Windows 7 is all about rounding off the corners and adding the polishing touches that Windows Vista only touched on and inspired.

  1. More ever-present Aero Glass experience, with lots of smooth animations and roll-overs.
  2. Explorer.exe got a huge overhaul with Aero and usability enhancements.
    • As is very well known, the ubiquitous taskbar that has been around through Windows 95, Windows NT 4, Windows 98, Windows ME, Windows 2000, Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 (did I miss one somewhere? surely I did ..) is now no more. There is no longer a taskbar. There is a bar down there, but it's more like a "smartbar"; the Quick Launch toolbar and the taskbar have sorta merged. It's all very much inspired, no doubt, by Mac OS X's dock, which frankly disgusts me. But so far I don't have a hatred of the Windows 7 smartbar thingamajig. I do very strongly believe that someone (i.e. Stardock), if not Microsoft themselves, will be pushing a "Windows Vista taskbar" as an add-on accessory to Windows 7, for those people who preferred it, as there is now a rather obvious market for it.
    • The awesome feature of the Windows Vista desktop compositing system that enabled Direct3D and high-definition video to be managed in an already-D3D desktop environment has been expanded upon in Windows 7. In Vista its advantages were only slightly touched upon: Windows key + Tab and the taskbar mouseover tooltip previews both re-displayed live windows in small, distorted form in realtime with no performance loss. I'm still discovering the Windows 7 additions, but the most obvious one is the smartbar mouseover for Internet Explorer, which shows each tab rendered in real time and lets you pick one. I hope to find a lot more such scenarios.
  3. Paint, Calculator, and Wordpad have finally been rewritten with an Office 2007 feel. We no longer have to puke on the Windows 95 versions. I didn't check whether Notepad was replaced with something anywhere near the simplicity yet completeness of Notepad2. But I doubt Notepad was touched, which, if so, is a shame. But at least there's always Notepad2. *cough*
  4. In general, the things in Windows, such as in the Control Panel, that got moved around a lot in Vista and that everyone complained about (such as me complaining about Monitor settings showing up under stupid Personalization) have been rearranged again. Generally, things are just better and more thought out. Vista was a trial run in this matter; Windows 7 beta is just more thought through. There are still quirky "features", but nothing I've found so far that is just glaringly wrong. I do think that the personalization bits are now too broken apart, but this might just be a style issue that needs some getting used to. Microsoft seems to be leaning more than before towards the Apple/Mozilla approach of pursuing minimalist options while burying advanced features down an obvious "Advanced" click-trail. Themes are consolidated sets now, a little more like Win95 Plus! themes in the sense of consolidation, and not so much isolated background, color, and sound options. But those options as individual settings are still there. In fact, Sounds is now (finally) a personalization configuration, as it should be.
  5. You start off with a big fish. Literally. It's a nice painting (of a fish). But come on. It's a fish! I went to choose a different background image, and, while I could very possibly be mistaken, I think the number of background images you can choose from has been slashed by half since Vista, and the new offerings in the theme picker don't look as good. Boooo!
  6. Other people ran the numbers, so I didn't do any testing, but the general consensus is that Windows 7 performs closer to Windows XP's performance than to Windows Vista's. (Read: It's very performant.)
  7. The max system rating has been nudged up from 5.9 to 7.9. My score of 5.7 on Windows Vista went up to 5.9 in Windows 7... but given the new max of 7.9, my year-and-a-half-old PC is no longer 0.2 from the ceiling. *sob*
  8. I was unimpressed to find that the color palettes across all themes, just like IE 8 beta on Vista, are way too bright. It's ugly and uncomfortable, and it's not easily configurable to make things darker, either.
  9. I haven't stressed Windows 7 yet with software to see how stable it is, but one of the first apps I downloaded was Google Chrome, and that puked. All of Windows froze up while I was doing something else, too, but I don't remember what it was, and that sort of thing is something I'd expect from a beta.

I have one other complaint. Windows Vista and Office 2007 introduced some really nice glow animations on buttons. Windows 7 pushes the Office 2007 glow animations and transition animations everywhere. The new smartbar (taskbar replacement) has a really, really cool "you just clicked me!" gradient animation that is almost magical. It's nice, but the animations are so slow that they're actually rather obnoxious. For example, in the new Calculator, if you simply hover over and click on a button, yeah, blue-gray turns amber, but mouse away and it seems to take a full three or four seconds for it to animate back to the original color. It's artistically nice, but it's just too long, and I think it will be too distracting. It might actually produce some serious usability issues; fast-moving users are going to be forced to slow down because the "feedback loop" they're getting on the screen is going to be just a big blur. I really don't like that. It's already making me a little nauseous. Weird, huh.

I think Vista got the animation timings on its close/maximize/minimize effects just right in this matter. Office 2007's ribbon buttons were just over the edge for my taste (too slow), and I could be wrong, but Windows 7 in various places feels like it tripled the Office 2007 animation timings (very, very slow).

Computers and Internet | General Technology | Operating Systems

SD Cards' Capacities Are Exploding

by Jon Davis 8. July 2008 21:16

Two years ago I really went out and splurged: I got myself a halfway decent 6MP digital SLR camera, lenses, a travel case, and the largest-capacity SD (Secure Digital) flash card I could get at Best Buy. Altogether, I spent somewhere on or around $1,000. The SD card was 2GB. It was about $100 at the time, as far as I can remember. But meanwhile I've found myself using it from time to time like a tiny fingertip-held floppy disk.

One year ago, solid state drives started to take off, and 32GB capacities were about the biggest you could get. Anything more than that cost about $1,000.

Now, 32GB SD cards are available at Amazon.com for less than $300. It's this tiny little thing, and yet it has room enough to dual-boot full installations of Vista and Linux with a complete suite of developer tools. So then the questions become: do BIOSes support booting from these things? And shouldn't these be the new solid state standard?

It might be a speed issue. "15MB" means that the card transfers data at only 15MB/s, so perhaps therein lies the problem. A 52x CD-ROM drive transfers at around 64 megabits per second (about 8 megabytes per second), so this transfer rate is only about double the speed of a standard high-speed CD-ROM drive. Whereas a SanDisk SSD5000 solid state drive (64GB) has a transfer rate of 121MB/s, which is about eight times as fast as this little SD card.

Even so, the geek in me wonders how far one can go with shrinking UMPCs with technologies like this.

General Technology | Computers and Internet

How My Microsoft Loyalty Is Degrading

by Jon Davis 7. June 2008 16:59

I've sat in this seat and often pronounced my discontent with Microsoft or a Microsoft technology, while still proclaiming myself to be a Microsoft enthusiast. Co-workers have often called me a Microsoft or Windows bigot. People would even give me written job recommendations pronouncing me as "one who particularly knows and understands Microsoft technologies".

But lately over the last year or two I've been suffering from malcontent, and I've lost that Microsoft spirit. I'm trying to figure out why. What went wrong? What happened?

Maybe it was Microsoft's selection of Ray Ozzie as the new Chief Software Architect. Groove (which was Ozzie's legacy) was a curious beast, but surely not a multi-billion-dollar revenue product; at best it was a network-based software experiment. Groove's migration to Microsoft under the Office umbrella would have been a lot more exciting if only it had been quickly adopted into the MSDN vision and immediately given expansive and rich MSDN treatment, which it was not. Instead, it was gradually rolled in, and legacy SDK support just sort of tagged along or else "fell off" in the transition. Groove was brought in as an afterthought, not as a premier new Microsoft offering. Groove could have become the new Outlook, a rich, do-it-all software platform that consolidated team workflows and data across teams and disparate working groups, but instead it became just a simple little "IM client on steroids and then some", and I quickly abandoned it as soon as I discovered that key features such as directory sharing weren't supported on 64-bit Windows. So to bring Ozzie in and have him sit in that chair, and then have that kind of treatment of Ozzie's own Groove--Groove being only an example, but an important, symbolic one--really makes me think that Microsoft doesn't know what on earth it's doing!! Even I could have sat in that chair with a better, broader sense of software operations and retention of vision, not that I'm jealous or would have pursued that chair. The day I heard Ozzie was selected, I immediately moaned, "Oh no, Microsoft is stuck on the network / Internet bandwagon, and has forgotten their roots, the core software platforms business!!" The whole fuzzy mesh thing that Microsoft is about to push is a really obvious example of where Microsoft is going as a result of bringing in Ozzie, and I hardly find a network mesh compelling as a software platform when non-Microsoft alternatives can so easily and readily exist.

Maybe it's Microsoft's audacity in abandoning their legacies in their toolsets, as they have done with COM and with VB6. There still remains zero support for easily building COM objects using the Visual Studio toolsets, and I will continue to grumble about this until an alternative component technology that is native to the metal is supported by Microsoft (or until I manage to get comfortable with C/C++ linked libraries, which is a skill I still have to develop 100% during my spare time, which is a real drag when there is no accountability or team support). I'm still floored by how fast Microsoft abandoned DNA for .NET. I can completely, 100% understand it--DNA reached its limits and needed a rewrite / rethink from the bottom up--but the swapping of strategies is still a precedent that leaves a bad taste in my mouth. I want my personal investments in software discovery to be worth something. I'm also discouraged--in the literal sense of the word: I'm losing courage and confidence--by the drastic, even if necessary, evolutionary changes Microsoft keeps making to its supported languages. C# 2 (with stuff like generics support) is nothing like C# 1, and C# 3 (with var and LINQ) is nothing like C# 2. Now C# 4 is being researched and developed, with new support for dynamic language interop (basically, weak typing), which is as exciting as LINQ was, but I have yet to adopt even LINQ, and getting LINQ support into CLR object graphs is a notorious nightmare; not that I would know firsthand, but everyone who tries it pronounces it horrible and massive. Come to think of it, it's Microsoft's interop strategy that has been very frustrating. COM is not Remoting, and Remoting is not WCF. WCF isn't even supported in Mono, so for high-performance, small-overhead interprocess communications, what's the best strategy, really? I could use WCF today, but what if WCF is gone and forgotten in five years?

Maybe it's the fact that I don't have time to browse the blogs of Microsoft's developer staff. They have a lot of folks over there, and while it's pretty tempting to complain that Microsoft "codes silently in a box", the truth is that there are some pretty good blogs being published from Microsofties, such as "If broken it is, fix it you should", albeit half of which I don't even understand without staring at it for a very long time. Incidentally, ScottGu does a really good job of "summing up" all the goings on, so thumbs-up on that one.

I think a lot of my abandonment of loyalty to Microsoft has to do with the sincerity of my open complaint about Internet Explorer: it is the most visible and therefore most important development platform coming from Redmond, but it is so behind the times and non-innovative versus the hard work that the WebKit and Mozilla teams are putting their blood, sweat, and tears into that things like this [http://digg.com/tech_news/Time_breakdown_of_modern_web_design_PICTURE] get posted on my wall at the office, cheerily.

Perhaps it's the over-extended yet limited branding Microsoft did with Vista, making things like this [http://reviews.cnet.com/8301-13549_7-9947498-30.html] actually make me nearly shed a tear or two over what Windows branding has become. That Windows Energy background looks neat, but it's also very forthright and "timestamped", kind of like how disco in the 70's and synth-pop in the 80's were "timestamped": they sounded neat in their day but quickly became difficult to listen to. That's what happens with too strong an artistic statement. Incidentally, Apple's Aqua interface is also "timestamped", but at least it doesn't default to a strong artistic statement plastered all over the entire screen. I like the Vista taskbar, but what's up with the strict black? Why can't that or other visual aspects be tweaked? At least it's mostly neutral (who wants a bright blue or yellow taskbar?), but it's still just a bit imposing IMO.

I'll bet it has to do with the horrifying use of a virtualized Program Files directory in Windows Server 2008, where the practice was unannounced. This is the sort of practice that makes it VERY difficult to trust Microsoft Windows going forward at all. If Windows is going to put things in places that are different from where I, as a user, told it to put them, then we have a behavioral disconnect: software should exist to serve me and do as I command, not to protect me from myself while deceiving me.

At the end of it all, I think my degrading sense of loyalty could just be the simple fact that I really am trying to spread out and discover and appreciate what the other players are doing. I mentioned before that Mac OS X is still the ultimate, uber OS, but now that I have it, I confess, I had no idea. Steve Jobs is brilliant, and it's also profound how much of OS X is open source--basically all of the UNIXy bits--which says a lot about OSS. Mind you, there are parts of the Mac I genuinely do not like and have never liked, such as the single menubar, which violates very key and important rules of UI design. I also generally find it difficult to manage multiple applications running at once; I much prefer the Windows taskbar over the Dock, if only because it's more predictable, and although it violates UI principles I prefer Alt+Tab across all windows rather than Command+Tab across applications, because every window is its own "workflow" regardless of which application owns it. But, among other things, building on PostScript for rendering, for example, was a fantastic idea; on the other hand, Microsoft's ClearType would have been difficult to achieve if Windows had used PostScript for rendering. Anyway, meanwhile, learning about and exposing myself to UNIX/Linux-based software is good for me as a growing software developer, and impossible to cleanly discover in Windows-land without using virtual machines.

In other words, the only way one can spread out, discover the non-Microsoft ways of doing things, and appreciate the process of doing so is to stop swearing by the Microsoft way to begin with and approach the whole thing with an open mind. In the end, the Microsoft way may still prove to be the best, but elimination of bias (on both sides) is an ideal goal to achieve before pursuing long-term personal growth in software.

General Technology | Operating Systems | Software Development | Microsoft Windows | Mac OS X

The Down Side of the Mac Mini

by Jon Davis 22. May 2008 23:23

I've only been blogging in the context of late-night fun lately, so bear with me; someday soon I'll get bored with all this and "get back to work" here on my blog.

So I got Vista installed via Boot Camp to get some experience with native Windows on my Mac Mini hardware. My hardware was the best that Apple offered in Mini form (2GHz + 2GB + 160GB). Here are the Vista performance assessment results; needless to say, I'm unsurprised, yet nonetheless still disappointed, by the horrible graphics card in this thing. But it's still a decent little machine for its size.

Numbers are on a scale of 0-to-5.9, not 0-to-10.

Incidentally, a buddy of mine told me that he has a friend who bought an Apple TV and hacked it, and it literally became a Mac Mini running Mac OS X. You can get a new Mac this way for ~$250-300!! I'd have reconsidered this Mac Mini if the Apple TV didn't have its limitations in display output ports.

General Technology | Computers and Internet

Mac OS X 10.5 (Leopard) + VMWare Fusion + Mono = Bliss

by Jon Davis 17. May 2008 15:13

I have been using my new Mac Mini for less than 24 hours and it already looks like this:

In the screenshot I have VMWare Fusion with Unity enabled so that I have the Windows Vista Start menu (I can toggle off the Start menu's visibility from VMWare itself) and Internet Explorer 7. (I also have Visual Studio 2008 installed in that virtual machine). Next to Internet Explorer on the left is Finder which is showing a bunch of the apps I have installed, including most of the stuff at http://www.opensourcemac.org/. On the right I have MonoDevelop where I can write C# or VB.NET applications for the Mac, for Linux, or for Windows. And of course, down below I have the Dock popped up because that's where my arrow actually is.

I also, obviously, have an Ubuntu VM I can fire up any time I want if I want to test something in Linux. 

Mac OS X 10.5 (Leopard) comes with native X11, not out of the box but on the installer CD, and it's the first OS X build to do so (previous versions used or required XFree86).

This point in time is a particularly intriguing milestone for the alignment of the moons and stars toward blissful cross-platform development using the Mac as the central hub of all things wonderful:
  • X11 on Mac OS X 10.5
  • MonoDevelop 1.0 is generally gold (released, it's very nice)
  • System.Windows.Forms in Mono is API-complete
  • VMWare Fusion's Unity feature delivers jaw-dropping, seamless windowing integration between Windows XP / Vista and Mac OS X. And to make things even more wonderful, VMWare Fusion 2, which comes with experimental DirectX 9 support, will be a free upgrade.
  • For game developers, the Unity game engine is a really nice cross-platform game engine and development toolset. I have a couple of buddies I'll be joining up with to help them make cross-platform games, something I always wanted to do. This is as opposed to XNA, which doesn't seem to know entirely what it's doing and comes with a community framework that's chock-full of vaporware. (But then, I still greatly admire XNA and hope to tackle XNA projects soon.)
  • The hackable iPhone (which I also got this week, hacked, and SSH'd into with ridiculous ease), which, when supplemented with the BSD core, is an amazing piece of geek gadgetry that can enable anyone to write mobile applications using open-source tools (I'd like to see Mono running on it). The amount of quality software written for the hacked iPhone is staggering, about as impressive as the amount of open source software written for the Mac itself. Judging by the quantity of cool installable software, I had no idea how commonplace hacked iPhones were.
  • Meanwhile, for legit game development, the Unity 3D game engine now supports the iPhone and iPod Touch (so that's where XNA got the Zune support idea!), and the iPhone SDK is no longer just a bunch of CSS hacks for Safari but actual binary compile tools.

WebKit Is Usable By End Users?

by Jon Davis 3. May 2008 18:20

I've been hearing a lot about WebKit being on the bleeding edge of staying up-to-date with performance and passing various tests like ACID 3. I was confused and concerned, though, because I had thought WebKit was only available to developers as a set of components (DLLs) and was not actually usable by end users.

I was sort of right, but mostly wrong. WebKit's nightly build, which is downloadable, runs on top of Safari (from a user perspective, that is .. technically, Safari sits on top of WebKit), replacing Safari's rendering engine with the latest "new and improved". After Safari 3.1 is fully installed, just download the latest nightly build of WebKit, run the batch file, and go. (There were two batch files; I ran run-drosera.cmd, and then I added a shortcut to run-nightly-webkit.cmd to my Quick Launch toolbar and changed the icon.) WebKit does not kill off the official Safari renderer when Safari is launched in its normal fashion; it only overrides the renderer when Safari is launched from WebKit's .bat file.

Now I'm starting to think that Safari on the latest WebKit is the best browser. *gasp* Who'da thunk?

Open Source | General Technology | Computers and Internet | Web Development

Bank ATM Innovations?

by Jon Davis 20. April 2008 21:57

When I moved to Scottsdale, I was introduced to automated check-out counters, where I scan my items in, including produce, then pay with my card, and then I leave, no human interaction required. (Someone is always watching, though, 4 counters at a time.)

But I was just blown away today by my bank's ATM (at Chase Bank). Very seldom do ATMs impress me. Bank ATMs are usually 5 to 10, sometimes 20 years behind the times when it comes to technology. Usually banks just give their machines facelifts--prettier graphics, crisper displays, and a few more words and details on the screens. But apparently over the last few weeks my bank replaced their ATM with a "new and improved" machine that, frankly, raises the bar about five notches.

Even Smarter Than A Human Teller

I rarely deposit checks, but when I did, I used to kinda cheat. I'd fill out the deposit slip, leaving the checking account number blank (because I didn't know it, and I didn't have a blank checkbook handy when going to the bank), and then I'd stuff it in the envelope and stuff that into the ATM. I'd always hear some dot matrix printer pounding something out on the envelope, so I always figured it printed my account number on it. The deposits would always clear when I deposited this way, even though soon after I started doing this I started seeing a big sign (which I ignored) taped up above the ATM saying, "To minimize the chances of delays processing your deposit, please include your checking account number with your deposit." I actually didn't care about delays. And when I go in in person, they have no problem looking up the checking account number themselves (using my ATM card).

Well today, apparently the bank figured out how to deal with troublemakers like me. They used technology.

I inserted my card, and it said, "Insert your check(s) into the slot. You can stack up to 20 checks at a time." Wha?! Insert my check(s)?! No deposit slip?? No envelope?! I stuck my check into the slot and it sucked it in. (It rejected it at first, but only because I had it in backwards.)

The machine then proceeded to scan my check, OCR'd the value of the check, displayed the scanned image to me, and asked me if it had determined the value correctly! There was the check I had just deposited, looking right back at me on the screen, with "Please confirm that the value of this check is $__, YES/NO".

As if to add a cherry on top of my sundae, when I was finished, it printed the scanned image of the check onto my receipt!!

If banks keep this up, they just might make banking more fun than my deposits are worth.

General Technology


 

