Sunday, September 26, 2010

On Software and Music - In With the Old

As one of the many software developers who are also musicians, I have always been fascinated by the frequent bundling of the gifts and passions for these two endeavors and what they have in common. I have always said that writing code and writing music feel a lot like the same process to me, like they're using the same parts of the brain. Music and code are certainly similar in many ways. Both need structure and coherence. Each work must be unique in some way, or it is meaningless. Both must follow some set of rules. What set of rules to follow is a creative choice. Sometimes you can even make up your own rules, but failure to follow them will pose a threat to the cohesion of the work. Once the rules are established, you may occasionally, carefully, and mindfully, make some real magic by breaking them.

Looking at these similarities causes me to wonder about their differences. If the creative processes are so similar, what about the products thereof? One difference I notice is that music seems to be much more durable than code. The software world is in so many ways all about "the new hotness." The music world also has this element, but old music is much more present in the world than old code. Not that old code doesn't have its own nostalgic place in our hearts. This is the magic of MAME and cool projects like the Applesoft BASIC Interpreter in JavaScript and FC64. It's why people buy Donkey Kong machines on eBay. The musical side of this nostalgia would be listening to Van Halen I or Synchronicity or Frampton Comes Alive or whatever you remember listening to as a kid.

But what about the work that we consider to be truly significant? In the music world we still study Stravinsky, Mendelssohn, Beethoven, Mozart, Bach, Gabrieli, even Gregorian chants, not as nostalgia, but as work that is still relevant and valuable today. Where is this reverence for history in software? What is the difference? Is it in the platform evolution? Each new computing device to hit the market seems to render last month's model instantly obsolete. The arsenal of musical instruments over the years has progressed more by expansion than by evolution. The new does not generally displace the old. We've added saxophones, steel-stringed guitars, drum kits, electric pianos, electric basses, synthesizers, and on and on, but the symphony orchestra still looks pretty much like it did three hundred years ago.

This is one of the things that I find so fascinating about all the recent movement in the field of functional programming. It's old! Lisp was developed in the 1950s, and we're studying this approach today not because it tells us where we've come from, or because we have fond childhood memories of it, but because it is valuable to us right now. I have never seen this happen before in this field. Maybe it's just because I'm getting old, but I'm intrigued and excited to see that "the new hotness" can be something that is older than I am.

Tuesday, September 21, 2010

We Want the Func! - Moving Toward Functional Programming

In the last year I have heard and read Uncle Bob talking about discovering the twenty-six-year-old treasure of a book, Structure and Interpretation of Computer Programs. It happened enough times that I decided to check it out. The book is available free of charge here. The language used in the book is Scheme, a dialect of Lisp. I hadn't written or read any Lisp since college, but even then I really liked it.

At the time I knew nothing of the distinction between functional and imperative programming. We didn't talk about immutability; I just thought it was interesting that you had to use recursion where you would normally have used a loop.

About fifteen years later I found myself learning XSLT. While XSLT does have the <xsl:for-each> element, I once again discovered that if I wanted to count from one to ten, I had to use recursion. That didn't throw me so badly, but what did throw me was the <xsl:variable> element. Maybe it was just the name and my own notions of how a variable is used. I thought I should be able to do this:

<xsl:variable name="x" select="1" />
<xsl:variable name="x" select="$x + 1" />

But nope. Once x is defined, that's it. No one was there at that moment to offer me the term immutability, but there it was. Little did I know that a few years later, single-threaded processing speed would max out, having bumped its head on the laws of physics, and this concept would give traction to something of a programming revolution in pursuit of scaling via concurrency.
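
To translate that into Ruby terms (a sketch of my own, not anything XSLT hands you), counting from one to ten without a loop and without reassigning anything looks something like this:

# Build the numbers 1 through 10 recursively, with nothing ever reassigned.
def count_up(from, to)
  return [] if from > to            # base case: nothing left to count
  [from] + count_up(from + 1, to)   # each call builds a new array; no mutation
end

count_up(1, 10)   #=> [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]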

After a decade or so of studying Gang of Four and Fowler, honing our OO skills, now .NET geeks are delving into F#, JVM nerds are digging into Clojure and Scala, and Erlang and Haskell are finding their way out of the classroom and into the business world. Even though web developers have been writing client-side code in JavaScript for the last fifteen years or so, most of us never noticed what great functional capabilities it had until recently when jQuery showed us what kind of magic a more functional approach had to offer. And why should all that magic be confined to the client side? Along comes node.js, bringing all that non-blocking functional goodness to the server, or wherever you might need it.

Anders Hejlsberg, king of Turbo Pascal, Delphi, and C#, has been talking lately about the future of programming languages and how important functional capabilities will be to keep them moving forward. This video is a little over an hour long, and it's all good, but if you don't have that much time, at least watch his overview of what functional programming is, which starts about 21 minutes in. It is excellent. He says so much with the simple phrase, "more like math." He also says some interesting things about "islands of functional purity" in the context of more conventional data-oriented (mutable-state) applications.
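
I take "islands of functional purity" to mean something like this little Ruby sketch of mine (the names are just for illustration): the interesting calculation is a pure function of its inputs, and the mutable, side-effecting work stays out at the edges.

# The island: no state, no side effects; same inputs, same output, every time.
def subtotal(line_items)
  line_items.map { |item| item[:price] * item[:quantity] }.inject(0) { |sum, x| sum + x }
end

# The edges: building up data and printing results happen outside the island.
cart = [{ :price => 3.50, :quantity => 2 }, { :price => 1.25, :quantity => 4 }]
puts subtotal(cart)   #=> 12.0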

So what is my point here? Simply that there are a lot of us who might just be starting to feel like we've made the transition from procedural to object-oriented programming, and we need to be aware of this functional movement and do what we can to embrace it and adapt to it. It's a very different way to think about solving problems, and it's a lot of fun. Onward and upward! There's more to life than inheritance.

Wednesday, September 15, 2010

Breaking the Ice with IronRuby

I'm a Windows guy and a Ruby guy, so of course I was disappointed at the recent developments around Jimmy Schementi's job change and what Microsoft is doing with the IronRuby team. I want to believe what some other folks are saying about the reports of IronRuby's death being greatly exaggerated, but it still doesn't look too good to me. A few days after this buzz started going around, I started thinking about what I was doing about it. I'm using Ruby on Windows quite a bit, and there are not a lot of us, but I'm doing all my Windows work on MRI (Matz' Ruby Interpreter). I have IronRuby installed, but I don't use it for anything but an occasional experiment.

If I really care about the future of IronRuby, I need to be a little more supportive, maybe even (dare I say it?) get involved. I recently discovered the new Iron Languages Podcast and learned that there are three (count them) IronRuby MVPs. I honestly didn't know there were any. Anyway, I got inspired to start trying to work IronRuby into my life somehow.

I am a little concerned about how compelling most not-already-Ruby-infected .NET developers will find IronRuby without any Visual Studio tooling. That makes it more of a .NET for Ruby folks than a Ruby for .NET folks. So I thought I would try using it for some of my normal Ruby stuff, the same way I might spike JRuby or Rubinius. Can I do some simple BDD with VIM, rspec, and autospec with Growl?

The first thing I did was to clean up my installation. If you don't have other versions of Ruby installed, the stock IronRuby install works just fine. They've replaced the commands ruby and gem with ir and igem, but if you install autotest or rspec or any other gem that you might call directly from the command line, you can run into conflicts if your MRI gems and your IronRuby gems are both in your path at the same time. I manually removed IronRuby from my path and added it to my pik list. Now I just use pik to choose IronRuby, and it handles the path changes for me.

By the way, if you have IronRuby installed in a directory with any spaces in the full path, it's going to cause you some trouble down the road with certain gems, so do yourself a favor and put it somewhere other than under C:\Program Files\.

I installed some of my favorite gems: rspec, autotest, and autotest-growl, all of which installed fine because they don't use native extensions. Then I got started on Uncle Bob's good old prime factors kata. You know what? It pretty much worked! The tests ran fine when triggered by saving files. I had a small problem with autospec's handling of Ctrl-C to force the tests to run on demand: it seemed to handle the keystroke twice, which caused it to exit, which was less than helpful. But other than that, it appeared to run just like MRI. Even my Growl notifications worked. Love me some Growl.
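
If you haven't done the kata, here is roughly the shape of what autospec was running for me (the exact specs and solution come out a little different every time you do it):

# prime_factors.rb -- decompose n into its prime factors.
def prime_factors(n)
  factors = []
  divisor = 2
  while n > 1
    while n % divisor == 0
      factors << divisor
      n /= divisor
    end
    divisor += 1
  end
  factors
end

# prime_factors_spec.rb -- the specs autospec re-runs every time a file is saved.
require 'prime_factors'

describe "prime_factors" do
  it "returns an empty list for 1" do
    prime_factors(1).should == []
  end

  it "returns a prime as its own only factor" do
    prime_factors(7).should == [7]
  end

  it "breaks a composite number down into primes" do
    prime_factors(12).should == [2, 2, 3]
  end
end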

Now that I've got IronRuby acting like Ruby, I need to get it acting like .NET. I'll save that for another day. Stay tuned.

Monday, September 13, 2010

Hey, you got Ruby in my JavaScript!

I am a big fan of Ruby's Enumerable module. It has some crazy useful methods that get mixed into every array. I have said before that I wish I could somehow use Ruby's Enumerable module in every other language I use. I don't know about every other language, but if we're talking about JavaScript, it's pretty doable.

For example, let's look at the find_all method from Enumerable. According to the documentation, when called on an enumerable enum with a block, it returns an array containing all the elements of enum for which the block is not false. The following line will give us the integers from 1 to 10 that are divisible by 3:

(1..10).find_all {|i|  i % 3 == 0 }   #=> [3, 6, 9]

It's like a little query engine on your arrays. Don't you wish we could do this in JavaScript? Something like this:

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10].findAll(
  function(x) {
    return x % 3 === 0;
  }
)  //=> [3, 6, 9]

Well, one very nice thing about JavaScript is that we can add methods to any object's prototype, and then any object of that type will have that method on it. In our case, we can add a findAll method to all Array objects. To make this work, all we have to do is execute this first:

Array.prototype.findAll = function(f) {
  var result = [];
  for (var i = 0; i < this.length; i += 1) {
    if (f(this[i])) {
      result.push(this[i]);
    }
  }
  return result;
};

Give it a try. Hey, you got Ruby in my JavaScript! Okay, to be fair, we can't really give Ruby the credit. It's just an example of higher-order function usage, but it enables me to give JavaScript a feature that Ruby has spoiled me with.

Enjoy.

Sunday, September 12, 2010

Douglas Crockford's JavaScript: The Good Parts

After hearing more recommendations than I could ignore, I have just finished reading Douglas Crockford's JavaScript: The Good Parts. Thanks to Jerry Cheung over at whatcodecraves.com for pushing me over the edge. Read it if you haven't already. It is full of ridiculously useful details about the ins and outs of a language that few of us can avoid, but it is also full of something I was not expecting: opinions. Crockford doesn't present a single concept without passing judgment on it. This is good. This is bad. This is awful. They got this right. They got this wrong.

I love this approach. The magic of it is that he not only passes judgment on each aspect of the language, but he ultimately tells us how to use the good parts and avoid the bad parts. This effectively makes the language better. It's not about bashing someone else's work; it's about helping us to make the most of it.

Saturday, September 11, 2010

The Cost of Continuous Improvement

Rails 2.3.9 was released this week. Rails 3.0 is already out, and the main purpose of this release is to smooth out the migration path a bit. This means we're watching for deprecation warnings, right? Okay. I installed it and ran my test suite. I got a whole bunch of this one deprecation warning:

DEPRECATION WARNING: Object#returning has been deprecated in favor of Object#tap.

I have been a fan of Object#returning, but Object#tap was a new one on me. I looked them both up. Their sources boil down to something like this (paraphrased a bit):
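
# ActiveSupport's Object#returning: yields its argument, then returns it.
def returning(value)
  yield(value)
  value
end

# Ruby's Object#tap: yields the receiver, then returns it.
def tap
  yield self
  self
end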


They clearly do exactly the same thing, one with an argument, the other with itself. It's the classic smell of duplication. Now they're fixing it. Their code is getting cleaner, and I have to change my code to stay in line and keep it from breaking in the future.
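
In practice the change is mostly mechanical. Something like this, where Post is just a stand-in for whatever object you happen to be building:

# Before: ActiveSupport's returning
returning Post.new do |post|
  post.title = "Hello"
  post.save
end

# After: plain old tap, doing the same job
Post.new.tap do |post|
  post.title = "Hello"
  post.save
end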

I have been developing software on the Microsoft stack for twenty years now. That is more than enough time for me to have based a few opinions about "how things are done" on their example. I have been developing in Ruby and in Rails for about three or four years now, and I see some very different ideas played out about "how things are done."

Among other things, I see two polar opposite approaches to breaking changes in tool sets, be they language or framework. Microsoft has always done everything they could to avoid introducing any breaking changes into their tools. This is a good thing, right? Who would not want to know that the code they wrote today will still work after they install the next update? In twenty years as a 'softie, the only major breaking changes I can remember came with the move from VB6 to .NET in early 2002, and even then, enough VB developers made enough noise that Microsoft went back and added backward compatibility for many legacy VB language constructs that had been removed in the 1.0 release.

On the other hand, new versions of Ruby and Rails often have some breaking changes. Not only do changes often break things, but they come much more frequently than they do in the Microsoft world. The standard update routine here is to read the release notes to see what to look for, install the update, run the test suite, see what broke, and get to making changes to fix things. This is a bad thing, right? Or is it?

Take a step back and look at the cumulative effect of these two approaches. In the Microsoft world, we have more of a sense of stability going forward. In the Ruby world, the tools have more freedom to change. If Microsoft allows any mistakes, bad design decisions, dangerous features, etc., out into the field and into the hands of developers, it is most likely going to be years before they can fix it, if they ever can. The problem might be there for the life of the platform. In the Ruby world, the same types of mistakes get made, but fixing them will usually involve a deprecation in one release and then a fix in the next.

So why can one world afford to change things and the other not? Is it because they just don't care? Is it because there is no corporate entity afraid of making everyone mad enough to abandon their platform? Ruby and Rails are clearly thriving in this world of breaking changes. How? What is the difference? The difference is in the tests.

They have created an ecosystem that is dependent on test coverage. If I have a good test suite, I know where my breaks are as soon as they occur, whether they have come from a library update or from changes I've made myself. Change is no longer so dangerous. We can also have a sense of stability here, but it comes from testing rather than from the lack of movement. If I am not in the habit of keeping good test coverage, it is going to be difficult for me to survive in this culture. So Darwin keeps us test-covered and change-tolerant.

When change is no longer something to be avoided, we gain a new freedom to improve things. The Boy Scout rule of leaving code in better shape than you found it has long-term effects. Things get better. This is what refactoring is all about, right? Don't we spend more time maintaining a system than building it? I think most systems get built way before they get good. And if they can't be refactored, molded, tweaked, rearranged, nurtured, changed, they never will get good.

I think it is the cumulative effect of the freedom to improve things that has brought Ruby and Rails to the place they are today. I find them thoroughly enjoyable tools to use. They're not perfect, but they're better than they were last year, and they're not as good as they will be next year. If that means I have to take a little time with each release to roll with the changes, that is a small price to pay.

pik: RVM for Windows

Of course RVM is pretty much ubiquitous for managing multiple versions of Ruby on Linux and OS X, but what about all you Ruby folks on Windows? I just want to make sure you know about pik. Installing new versions is not quite as smooth an experience as with RVM, but it's not bad, and switching between versions is actually a better experience than with RVM, because pik fuzzy-matches the version against what you type.


So if you've been itching to play around with Ruby 1.9.2 but you're still working on an app in 1.8.7, pik is your answer. And if you're still clinging to your Visual Studio and have not yet taken the plunge to see what all this Ruby buzz is about, by all means head over to rubyinstaller.org right now and get started. You won't regret it. The more Windows users on Ruby, the better.

Free Stuff

I really like free stuff. I recently discovered the Music City Circuit, a free bus service that runs two routes through downtown Nashville. Yes, the routes are limited, but hey, it's free. It opens up all kinds of options when you're trying to spend less on your parking than on your dinner.