
It's a very common practice in Ruby to use Module mixins to enhance the functionality of a class. In fact, one of the most powerful and useful features of the Ruby language is that it is so easy to do so. Great stuff all around.
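If you're new to mixins, the basic mechanics look like this (Greetable and Person are made-up names for illustration, not from any library):

```ruby
# A module's instance methods become available on any class that includes it.
module Greetable
  def greet
    "Hello, #{name}!"
  end
end

class Person
  include Greetable

  attr_reader :name

  def initialize(name)
    @name = name
  end
end

Person.new("Ruby").greet # => "Hello, Ruby!"
```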

Another common pattern, however, is to want to provide some include-time configuration when the module is mixed in. Let's imagine I'm writing an extension for ActiveRecord that creates a slug based on some field. What I want in the end might be something that looks like this:

class MyModel < ActiveRecord::Base
  slug :name
end

OK, that seems easy enough. How can I implement that? Well, there are a number of different ways. I could just re-open ActiveRecord::Base:

module ActiveRecord
  class Base
    def self.slug(field)
      # do some things here...
    end
  end
end

Ruby gives us a lot of rope to implement things in different ways. In this case, it's given us enough to hang ourselves. Opening existing classes like this is a bit dangerous, especially in an environment like Rails that does so much lazy loading of classes. How can we improve on this? Well, we could make our slug extension into a module:

module Sluggable
  def slug(field)
    include Sluggable::ActivatedInstanceMethods
    # do something with field
  end

  module ActivatedInstanceMethods
    # some code here...
  end
end

ActiveRecord::Base.extend Sluggable

This is a little bit better: we are encapsulating the behavior of Sluggable into its own entity rather than polluting an existing class. This still doesn't seem quite right, though: why does every ActiveRecord class need to know about this slug method? Ideally I would only modify the behavior of classes that I actually want to implement the slug behavior, not all ActiveRecord classes.

So what's next? Well, we can do something like this instead:

class MyModel < ActiveRecord::Base
  extend Sluggable

  slug :field_name
end

This works fine, but it's a little distasteful looking. Why do I need to have two lines of code in my model? Doesn't that defeat some of the purpose of trying to abstract functionality out into its own little world?

Introducing Imbue

The more I thought about this problem, the more I felt like there needed to be a more primitive implementation of this pattern within Ruby. I want to be able to include a module into a class with the knowledge that the module is going to, in some ways, reconfigure my class with optional arguments that I pass. Of course, some would argue that if we allowed the include keyword to do this we would be stripping modules of their dignity. So can we come up with perhaps a different keyword that represents the pattern of "transformative mixin"? I propose imbue. Imbue means "Inspire or permeate with a feeling or quality" which sounds pretty similar to what we're trying to accomplish here.

So how can we go about implementing this concept of "imbue"? Here's a dead-simple version of it:

class Module
  def imbue(mod, *args)
    result = include(mod)
    mod.imbued(self, *args) if mod.respond_to?(:imbued)
    result
  end
end

Well, that's pretty concise, huh? Surely seven lines of code can't have that much effect on our problem area, can they? Here's what they let us do:

module Sluggable
  def self.imbued(base, source, options = {})
    base.extend ClassMethods
    base.send :include, InstanceMethods
    # additional transformation here
  end

  module ClassMethods
    # relevant class methods
  end

  module InstanceMethods
    # activated instance methods
  end
end

class MyModel < ActiveRecord::Base
  imbue Sluggable, :field_name
end

In the end, I really like the way this looks. It's less cryptic than the hide-the-ball class method because it gives us the fully qualified module. It's more concise than the include-and-class-method strategy because it lets us pass options along with the module we want to include. It's more Rubyish than the factory pattern, giving us a simple standard method call instead of using metaprogramming to generate an anonymous module. To sum up, I really like it!

This pattern is so commonly used in Ruby that I would almost argue that something akin to imbue should be a part of the language itself. It gives you a great way to encapsulate transformative behaviors that can then be applied generically. What do you think about imbue? Would you like to see it as a gem or even part of Ruby itself? Let me know in the comments.


One of my favorite aspects of Ruby is that just about everything is an object, even Class and Module. The ability to instantiate "anonymous" classes and modules can give you a great deal of power and help you out in situations where you otherwise might not have a clean solution.

What do Anonymous Things Look Like?

Anonymous classes and modules are just like other classes and modules but a little different. This can be seen best by example:

c = Class.new # => #<Class:0x000001009dea80>
c.name        # => nil (normally this would be something like "Object")
c.class       # => Class
c.new         # => #<#<Class:0x000001009dea80>:0x0000010096b3c8>

So what we did was create an instance of the Class class, which is itself a class. Notice that by calling c.new I was able to create an instance of my anonymous class just like I would with a normal one. Now let's take a look at anonymous modules:

m = Module.new

c = Class.new
c.send :include, m

c.ancestors # => [#<Class:0x000001028640b8>, #<Module:0x00000100849030>, Object, Kernel, BasicObject]

Once you've created an anonymous module you can include it in a class just like you would with a normal module. Anonymous classes and modules behave just like the real thing, they just don't have constant names attached to them!
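One related detail worth knowing: an anonymous class stays nameless only until you assign it to a constant, at which point Ruby attaches that constant's name to it (Widget here is just an illustrative constant):

```ruby
c = Class.new
c.name   # => nil

# Assigning the class to a constant is what gives it a name.
Widget = c
c.name   # => "Widget"
```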

So now that you can recognize an anonymous class or module, how can you use them in the real world? Is this just a bunch of theoretical nonsense with no practical application?

Keep Specs Fresh With Anonymous Classes

Sometimes it can be difficult to test a module since you don't want to test it tied to a specific implementation. Let's say we have a module that looks like this:

module MyModule
  def something?
    true
  end
end

How can we test it? One solution is to create a "test" class like this:

class MyModuleTest
  include MyModule
end

it 'should be something' do
  MyModuleTest.new.should be_something
end

But there's a problem with that approach: since your test class lives outside the scope of your test, it doesn't get torn down at the end of each test run without manual intervention. So what can we do instead?

let(:fresh_class) do
  Class.new { include MyModule }
end

it 'should do something' do
  fresh_class.new.should be_something
end

Now we get a pristine class each and every test that we know doesn't have any state baggage. Anonymous classes to the rescue!

Anonymous Modules for Composite Functionality

So now we've seen an example of when anonymous classes can be used, but what about anonymous modules? Well here I'll show you a real piece of code from Grape, my framework for building REST-like APIs.

Grape allows users to define helper methods similar to Sinatra. When a user creates an endpoint for the API, Grape gives them access to all helper methods that were defined in the current context as well as parent contexts. We need a way to store all of the helpers that a user provides and be able to instantly create a module that is a composite of all of the helpers that were defined thus far. It sounds complicated (and it is, a little bit), but here's how we might use it in the end:

class MyAPI < Grape::API
  helpers do
    def user?; false end
  end

  namespace :authenticated do
    helpers do
      def user?; true end
    end

    get '/' do
      user? # => true
    end
  end

  get '/' do
    user? # => false
  end
end

Now here's the behind-the-scenes code that makes this possible (with some added comments to explain the usage of anonymous modules):

def helpers(mod = nil, &block)
  # If a block is given or an argument passed the user is
  # *setting* helpers.
  if block_given? || mod
    # Grab the existing anonymous module for this context
    # or create a new one if there isn't one yet.
    mod ||= settings.peek[:helpers] || Module.new

    # Evaluate the block passed to the `helpers` method
    # in the context of our anonymous module.
    mod.class_eval(&block) if block_given?

    set(:helpers, mod)

  # If no block or argument is passed the user is
  # *retrieving* helpers.
  else
    # Create a fresh anonymous module that isn't
    # tied to any existing context.
    mod = Module.new
    settings.stack.each do |s|
      # For each context in our stack, include the
      # defined helpers in order.
      mod.send :include, s[:helpers] if s[:helpers]
    end
    mod
  end
end

Hopefully that isn't too dense to make sense, but anonymous modules allow us to create on-the-fly mixins that are included in the endpoint code to give you access to the helpers you've defined.

Go Forth And Be Nameless

These are just a few examples of the usefulness of anonymous classes and modules; they are powerful tools that can give you more flexibility in designing and testing your Ruby code. Do you have a cool use case for anonymous classes and/or modules? Let me know in the comments!

Image Credit: Astrojunta on Wikipedia


I've never read Amazon's Dynamo paper. I've also never had the opportunity to work with Cassandra or SimpleDB, but when Amazon announced DynamoDB I thought it was time to take a little bit of time to learn what it was just in case it was super-useful. I thought I'd share a few of my findings.

Disclaimer: I'm completely new to this style of NoSQL system and may well in fact be misusing it in places. Feel free to give me some free education if I'm doing something horrendous below.

What is DynamoDB?

DynamoDB is a NoSQL database hosted by Amazon, designed to shift the burden of scaling your data onto Amazon until it goes to 11 (or 11,000,000). I've seen a number of posts describing how DynamoDB works, but not really much about what it is.

DynamoDB is (mostly) an enhanced key-value store with a few features that bring it beyond a simple KVS. Those features include:

  1. You store a variety of typed attributes in the system rather than just raw string data, so you can have numbers, strings, sets of numbers, and sets of strings.
  2. You can provide a "range key" which essentially gives you a single indexed field upon which you can perform queries. This is likely to be something like a timestamp or other "ordering" key from the use cases I've figured out.
  3. You can perform atomic increment/decrement and set add/remove operations on rows in your DynamoDB tables.
  4. Each table has a defined read and write throughput, so you can literally just tell Amazon how much scale you need and it takes care of the rest behind the scenes.

One thing that surprised me (perhaps I was being dense) is that if you create a table that has both a hash and a range key, you can have multiple values with the same hash key. In fact, you can only query values that share a hash key, so this is the intended use case.

Getting Started (with Ruby)

Unfortunately, Ruby is not one of the listed example languages in the DynamoDB documentation. Fortunately, Amazon does support DynamoDB through its aws-sdk gem and it's relatively straightforward to use.

First, you'll need to sign up for DynamoDB through the AWS console. Luckily there is a free usage tier that gives you 100MB, 10 reads/second and 5 writes/second.

Once you've done that, you'll need to fetch your AWS Security Credentials so that you can connect to DynamoDB from Ruby. Got 'em? Good.

I set out to build a very basic Twitter-like system as a proof-of-concept of DynamoDB. For that, I wanted to have two tables: tweets and users. I managed my schema like this:

require "aws"
AWS.config(
  access_key_id: ENV["AWS_KEY"],
  secret_access_key: ENV["AWS_SECRET"]
)

DB = AWS::DynamoDB.new
TABLES = {}

{
  "tweets" => {
    hash_key: {timeline_id: :string},
    range_key: {created_at: :number}
  },
  "users" => {
    hash_key: {id: :string}
  }
}.each_pair do |table_name, schema|
  begin
    TABLES[table_name] = DB.tables[table_name].load_schema
  rescue AWS::DynamoDB::Errors::ResourceNotFoundException
    table = DB.tables.create(table_name, 10, 5, schema)
    print "Creating table #{table_name}..."
    sleep 1 while table.status == :creating
    print "done!\n"
    TABLES[table_name] = table.load_schema
  end
end

This bit of code contains the schema information for each table as a hash (you can specify a hash key and, optionally, a range key when creating a table). It then checks to see if each table exists and creates it if not (in this example using 10 reads/second and 5 writes/second). Creating tables in DynamoDB is a non-trivial operation and may take as long as a minute (probably a good deal more with a heavy throughput). Once it's created the tables it loads the schema for them (required for later operations) and stores the resulting object reference in a TABLES constant.

Next I needed to learn how to actually manipulate data in the tables, so I created some barebones models to accomplish my needs. You can see the full models file in this Gist but here are some of the highlights:

# Create a user with id "username"
TABLES["users"].items.create(id: "username")

# Dump a hash of attributes for all users
TABLES["users"].items.each { |i| puts i.attributes.to_h }

# Fetch a specific user
user1 = TABLES["users"].items.at("username")
user2 = TABLES["users"].items.at("username2")

# Follow another user
user1.attributes.add(following: ["username2"])
user2.attributes.add(followers: ["username"])

# Post a tweet
now = Time.now
user1.attributes["followers"].each do |follower|
  # follower is a username string from the followers set
  TABLES["tweets"].items.create(
    timeline_id: follower,
    created_at: now.to_i,
    text: "This is the tweet text."
  )
end

# Retrieve 24 hours of tweets for a user's timeline
TABLES["tweets"].items.query(
  hash_value: "username",
  range_greater_than: 1.days.ago.to_i
)

Hopefully reading the code above gives you some idea of the simple operations for creating records, performing an atomic operation, and querying by hash key or range.

What Next?

DynamoDB is a bit of a puzzle to me. It seems to me that it would mostly be useful for applications that have already pushed the limits of more flexible data solutions like MongoDB (or even SQL) and need intense levels of data throughput with reliable redundancy. I don't think you would likely start your application out architected to use DynamoDB, but at least now I've explored it enough to add it to my toolbelt if I come across a situation where its unique blend of features makes sense. Are you using DynamoDB or looking at it for a project? If so, I'd be curious to know your use case.


Unit tests should pass when run in random order. But in an existing legacy project, certain tests might depend on the execution order. One test might run perfectly fine by itself, but fail miserably when run after another test. Rather than running different combinations manually, RSpec 2.8 has the option to run specs in random order with the --order random flag. But even with this it can be hard to determine which specific test is causing the dependency. For example:

     rspec spec/controllers                          # succeeds
     rspec spec/lib/my_lib_spec.rb                   # succeeds
     rspec spec/controllers spec/lib/my_lib_spec.rb  # fails

In this scenario you know that one of the spec files in spec/controllers is not jiving with your lib spec, but if you have hundreds of spec files, it's hard to tell which. Never fear! There's a Ruby one-liner for that:

     ls spec/controllers/*.rb | ruby -pe '$_=`rspec #{$_} spec/lib/my_lib_spec.rb`' 

Let's break this command down into its components:

     ls spec/controllers/*.rb 

gives you a list of spec files to run alongside your lib spec

     ruby -pe 

'e' for execute, and 'p' means wrap the code in a loop and assign each line of STDIN to $_. We're piping in STDIN from the ls command.

     $_=`rspec #{$_} spec/lib/my_lib_spec.rb`

The 'p' flag also prints out the value of $_ at the end of each loop. So we assign the output of running rspec with the two files (one from ls alongside my_lib_spec) back to $_, which is then printed.
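To see the -p loop in isolation, here's a toy example of my own (not from the original post) that uppercases each line of STDIN:

```shell
# Each line of STDIN is assigned to $_, the -e code runs,
# and then $_ is printed automatically.
printf 'foo\nbar\n' | ruby -pe '$_ = $_.upcase'
# prints:
# FOO
# BAR
```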

My bash buddies would wag their fingers at me for using a ruby one-liner here, but it's a familiar syntax and it's easier for me than remembering other shell commands and regex flags. If there's something another unix program is better at processing, then I can then take the output of the ruby one-liner and pipe it into another command. It's a very simple and versatile way to munge on text.


Metro areas generally have really active user groups where Rails_Awesome_Lord presents regularly, famous hackers drop in to give presentations, and the Rails Elite throw smashing parties and drinkups after each meeting. But not all developers live in (or near) metro areas and can partake in such festivities. If you're among the rural band of outlaw programmers, this post is for you.

Portland, Maine isn't a tech hotbed by any means and when Adam Bair and I took over our small Ruby User Group after the last coordinator moved to NYC we were pretty sure that garnering attendance and participation would prove difficult in our area. However, to our surprise it wasn't hard at all. In fact, we found that Maine had a scattered yet hardcore group of programmers, each looking to find other programmers. We meet monthly and the size of our group fluctuates from 6-15 people depending on the month. We do very little outreach aside from Twitter announcements and messages to the Google Group. So what's the secret? How do you call the mavericks out of their programming caves and get them to join you?

Here are some tips for getting your own user group started, even if your village is but wee and agrarian:

  1. Location - We host the user group right at our house. Since we already have all the necessary hardware (laptops, HDMI cables for hooking laptops up to the TV, seating, etc.) it's convenient to just host everyone at our place once a month. It's much easier than lugging a bunch of equipment around, negotiating with companies to use their space, setting up equipment and so forth. Additionally, people tend to feel more relaxed in a house setting, which leads to more in-depth conversations, knowledge-sharing and time spent together.
  2. Money - If you're hosting the user group in your own space (or in another free/low-cost charge space) you won't need a lot of financial support. It's worth asking your own employer if they would be willing to sponsor the event with pizza and drinks in return for handing out a few stickers and mentioning their support. Intridea sponsors our small group each month (along with several others) with pizza and drinks! If your employer can't help you out chances are that another member's employer might be willing to help you in exchange for some promotion.
  3. Content - Herein lies the challenge that most user group coordinators are faced with! It can be cumbersome to come up with good presentations every month. Here are a few ideas:
  • Presentations are not necessary for a rural user group. In fact, many of your members might dread public speaking and would probably appreciate a more casual format to the meetings until you all get to know each other better. Instead of official presentations, consider volunteering to show off some code you've been working on to the group. Afterward, it's likely that someone else will feel inclined to show some code as well. If the members can trust each other to be low-key (who wants to feel like they're going to work at the office when they go to a user group?) then everyone will end up sharing more information in the long run.
  • If you do offer a presentation, keep in mind that it doesn't need to last 60 minutes, nor does it need to be delivered to the group as though you were presenting at RailsConf (unless of course, you are presenting at RailsConf and need somewhere to practice!). If you just want to run through some new code you've been working on and that takes 20 minutes, that's ok. If you want to put together a presentation on CoffeeScript, keep it light and engaging. Programmers just enjoy getting together to talk shop with each other. If we don't have anything officially prepared for the group then we'll open up the floor to people that want to show off some code.
  • Ask other members to give presentations - if you hear that one of your members has been learning Backbone.js for a new project at work then ask him to present at the next meeting.
  • It's beneficial to maintain a good relationship with other local/nearby user groups. Often our Portland members will caravan down to the New Hampshire and Boston Ruby User Groups if we don't have any concrete plans for our own group that month. This way, our members are still getting together and talking about programming.
  • Twitter is your friend - Make sure you follow local (and local-ish) devs. If you catch wind that one of them is coming close to your town exercise those social skills and reach out to them - invite them to speak at your user group while they're in town! We've been fortunate to have generous programmers from the New Hampshire and Boston area who have travelled to give presentations to our group in Portland.
  • Burn Out - If you start to feel burnt out, rather than let the user group die off, reach out to another regular member and ask for some help. There's no shame in taking a sabbatical!

With all of those tips in mind, there's also one more important thing to remember: a user group is a community. It takes a little bit of time and effort to build it, but once you've done that work it comes with all the benefits of any other close-knit community. If the community is cared for then it can become a tremendous resource to all of its members for anything from code advice, job hunting and mentoring to board game partners and craft beer enthusiasts.

If your area is lacking a user group, step up and host one; people will be thankful that you did! A house, a laptop, and a few conversations on Twitter is all you really need to get started. And maybe a year from now you'll be able to look around you and see a strong community of programmers gathered together, sharing stories, strategies, and experiences.


As a new developer to Ruby you might wonder how certain methods seem to be magically available without being strictly defined. Rails's dynamic finders (e.g. find_by_name) are one example of this kind of magic. It's very simple to implement magic such as this in Ruby, but it's also easy to implement things in a way that doesn't entirely mesh with standard Ruby object expectations.

Your Friend method_missing

The way that many magic methods are implemented is by overriding method_missing. This special method in Ruby is automatically called by the interpreter whenever a method is called that cannot be found. The default behavior of method_missing is to raise a NoMethodError letting the user know that the method that was called does not exist. However, by overriding this behavior we can allow the user to call methods that aren't strictly defined but rather programmatically determined at runtime. Let's look at a simple example:

     class Nullifier
       def method_missing(*args)
         nil
       end
     end

     nullifier = Nullifier.new
     nullifier.some_method     # => nil
     nullifier.foo(:bar, :baz) # => nil

Here we simply told method_missing to immediately return nil, regardless of the method name or arguments passed. This essentially means that, for this class, any method call that is not defined on Object (the default superclass for new classes) will return nil.

While this example is certainly interesting, it doesn't necessarily give us more use in the real world. Let's take another example that actually does something useful. Let's make a hash that allows us to access its keys by making method calls:

     class SuperHash < Hash
       def method_missing(method_name, *args)
         return self[method_name.to_s] if key?(method_name.to_s)
         super
       end
     end

     h = SuperHash.new
     h['abc'] = 'def'
     h.abc            # => 'def'
     h.something_else # => NoMethodError

This behavior gives us something pretty simple yet powerful: we have manipulated the foundation of the class to give us runtime methods. There's a problem, though: using method_missing alone is only half the story.

Quack Check With respond_to?

In Ruby, you can call respond_to? with a symbol method name on any object and it should tell you whether or not that method exists on the object in question. This is part of what makes Ruby's duck-typing work so well. So in our example, we also want to be able to know if a method is there using respond_to?. So let's add a new override for the respond_to? method of our example above:

     class SuperHash < Hash
       def respond_to?(symbol, include_private = false)
         return true if key?(symbol.to_s)
         super
       end
     end

Well, that was easy enough. Now our SuperHash will return hash keys based on method_missing and even tell you if the method is there with respond_to?. But there's still one more thing we can do to clean things up a bit: notice how we have repeated functionality in that we check key? in both methods? Now that we have a respond_to? we can use that as a guard for method_missing to make it more confident:

     class SuperHash < Hash
       def method_missing(method_name, *args)
         return super unless respond_to?(method_name)
         self[method_name.to_s]
       end
     end

Wait, that can't be right, can it? Can we just assume that we can call the key like that? Of course! We already know that no existing method was called if method_missing is activated. That means that if respond_to? is true but no existing method was called, there must be a key in our hash that caused respond_to? to return true. Therefore we can confidently assume that the key exists and simply return it, removing the conditional and cleaning up the method_missing substantially.

Now that you know how method_missing and respond_to? can work together to add functionality to an object at runtime, you have a powerful new tool in your metaprogramming arsenal. Enjoy it!


Intridea Partner and open source crusader, Michael Bleigh, will be back in his hometown of Kansas City this week, presenting "Rails is the new Rails" at Ruby Midwest.

The sweeping changes brought on by Rails 3 and 3.1 haven’t just made our existing development patterns easier, they have opened up the ability for us to build new patterns that accomplish more in a more beautiful and efficient way. In this session you will see how thinking about new features in a different light can lead to real innovation for your development practices. Examples include baking routing constraints into your models, application composition with Rack, truly modular design with the asset pipeline, and more.

Coming off a huge month of open source development on OmniAuth (version 1.0.0 was just released this morning), and working onsite for a huge client in NYC on a cutting-edge Rails app, Michael is excited to share his recent insights on Rails 3. Be sure to catch his presentation on Friday, November 4th at 10:30 am, just after the morning break. Follow Michael on Twitter for updates from this (and upcoming) conferences!


Last month Intridea sponsored RailsCamp New England - a Rails retreat in the western mountains of Maine. Adam and I attended the event for the second time (this was the fourth U.S. Rails Camp, and the second one in Maine) along with 38 other Ruby and Rails developers. On a rainy Friday evening we all settled in the cozy Maine house for a long weekend of geekery.

Ben Askins started the RailsCamp movement in Australia in 2007 and with the help of Pat Allan's enthusiasm, RailsCamp took off! In 2009 Pat and Brian Cardarella worked together to bring the tradition to the New England area. In the last five years RailsCamps have been organized throughout much of Europe, the UK, Australia and the eastern side of the U.S.

The spirit of RailsCamp is simple - bring Rails devs to the backcountry, isolate them in a house for a long weekend, and watch what happens. There is a local network for sharing resources, and a local server with a mirror of RubyGems. If the idea of limited access to the internet and modern amenities causes you alarm, do not fear - it's not a Luddite conversion retreat. Though the setup seems primitive it is actually quite intimate and inspiring. The isolation removes most non-programming-related distractions, and the relaxed environment is conducive to epically long hack sessions. So what does happen when you throw 40 programmers in a house together without internet?

We hack. We collaborate on projects. We share information and tools. We get feedback on our code. The veterans share the experience they've gained from decades of programming. The shy ones hack in quiet corners and observe and absorb the information that's being shared. We share meals, enjoy evening beverages together, fight for our lives in fireside games of Werewolf, swim under a sea of stars in a cool lake, and flex our gamer cred in fast-paced rounds of Urban Terror.

This year we even produced something other than code! Pascal Rettig brought along a Thing-O-Matic, a personal fabrication machine, which we used to make 3D plastic RailsCamp logos. As we all geeked out over this I marveled at how the reaction to creating a tangible item was as thrilling for us as creating applications. I thought more deeply about the similarities between manufacturing and programming, so look for that post in the near future!

As the Rails community continues to expand and evolve rapidly, its conferences have become increasingly monolithic. While the larger conferences still have a tremendous amount of value, it's nice to have events like RailsCamp, where there still exists a profound intimacy among coders. For a passionate Rubyist it doesn't get much better than a laptop, a local gem repository, good food, games, and a cabin in the woods with a group of other passionate Rubyists. It's the stuff geek summers are made of. You can view additional photos on our Flickr page!


Of all of the new tools that I've picked up using for development in the past six months, there is one that has come to stand above the others for its nearly universal utility. That tool is Guard.

Guard is a RubyGem but don't let that fool you into thinking it's only useful for Ruby projects. Guard is essentially an autotest for everything. It provides a general purpose set of tools for watching when files are changed in your project and taking action based on it. You can use it to do just about anything, but common uses will include:

  • Re-running automated tests after a file changes.
  • Automatically compiling scripts or assets for a project (e.g. minification).
  • Installing new dependencies that may be added to the project.

With a little creativity and a slight bit of Ruby coding, though, you can make your entire project's workflow run smoother and faster. It's like having a telepathic robot buddy who just goes around doing whatever you were about to do next without having to be told (except the first time).

Getting Started With Guard

Guard requires a basic Ruby setup. Once you have Ruby and RubyGems installed, simply run:

gem install guard 

This will get you started. If you want to make it easier for others to run your guards as well, you should also install Bundler to encapsulate the different guard gems you'll be using:

gem install bundler 

Once you have these installed, in the root of your project run:

guard init 

This will initialize a Guardfile in the project root that will be telling Guard what to do going forward. From here, you will want to install some of the Guard extension gems that let you quickly create automation for your project. Some of my favorites:

  • guard-rspec: Automatically run RSpec tests based on easy-to-customize patterns. I use this on almost every Ruby project these days.
  • guard-coffeescript: Compile CoffeeScript into JavaScript lickety-split. Even though CoffeeScript has its own automatic build command with the -w option, I prefer Guard because it lets you define the configuration once and, in addition, run a single process for all of your project's automation.
  • guard-process: This is the guard for anything that doesn't have a guard yet. Using this you can quickly and easily run shell commands as soon as files change, giving you the ability to do almost anything.
  • guard-sass: Never write vanilla CSS again. Using guard-sass you can automatically compile Sass, giving you the full power of mixins, variables, and more for all your styles.

There's a full list of guards that include all kinds of magic (there's even guard-livereload that can automatically refresh your browser whenever you make a change to a project), and it's dead simple to create new Guard libraries if what you want isn't available (or you can just use guard-process).
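If you do write your own guard library, it's mostly boilerplate around a small class. As a rough sketch (the class and callback names below follow the Guard plugin API of this era, so check Guard's own documentation before depending on them; Shout is a made-up name):

```ruby
require 'guard'
require 'guard/guard'

module Guard
  # A hypothetical guard that simply reports which files changed.
  class Shout < Guard
    # Called once when `guard` starts up.
    def start
      UI.info 'Shout is watching your files...'
    end

    # Called with the paths that matched a `watch` pattern in the Guardfile.
    def run_on_change(paths)
      paths.each { |path| UI.info "Changed: #{path}" }
    end
  end
end
```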

Standing Guard

For any of the Guard gems you install, you can add them to your Guardfile by running:

guard init guardname 

Where guardname might be rspec, coffeescript, etc. This fills your Guardfile with a basic configuration for the given guard, which is usually enough for you to tweak the settings to your liking without further documentation.
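For example, running guard init rspec drops a block into your Guardfile that looks roughly like this (the exact patterns vary by version, so treat this as representative rather than exact):

```ruby
guard 'rspec' do
  # Re-run a spec file whenever it changes.
  watch(%r{^spec/.+_spec\.rb$})
  # When a lib file changes, run its corresponding spec.
  watch(%r{^lib/(.+)\.rb$}) { |m| "spec/lib/#{m[1]}_spec.rb" }
  # A change to the spec helper re-runs the whole suite.
  watch('spec/spec_helper.rb') { 'spec' }
end
```

Each `watch` maps a change to the paths Guard should act on; the optional block turns a matched file into the file (or directory) to run.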

There's a great example of using Guard for a big Rails project, but I'm not just using it for Ruby. I've used Guard on jQuery plugins, Node.js projects, even static websites that I've been building (more on that a little later).

To make it easier for others to jump into your project with Guard, it also helps to use Bundler to maintain a Gemfile that points to the various guards you're using for the specific project. Just run bundle init to get Bundler up and running, then edit the Gemfile to look something like this:

source 'http://rubygems.org'

gem 'guard'
gem 'guard-coffeescript'
gem 'guard-process'

Then run bundle install. Once your gems are installed and you've set up your Guardfile, just run:

bundle exec guard 

Guard will start up right away and your project now has some smooth automation action. Guard will even reload itself if you modify the Guardfile, so feel free to tweak as you go!

Guard in the Real World

I'm going to post just a couple examples of Guardfiles I've been using in my projects recently to give you an idea of its versatility.

Guarding a jQuery Plugin

Here's the Guardfile for Sketch.js, a jQuery plugin that I just released:

# Automatically build the source CoffeeScript into the lib directory
guard 'coffeescript', :input => 'src', :output => 'lib', :bare => true
# Also automatically build the test CoffeeScripts
guard 'coffeescript', :input => 'test', :output => 'test', :bare => true

# Run Docco
guard 'process', :name => 'Docco', :command => 'docco src/sketch.coffee' do
  watch %r{src/.+\.coffee}
end

# Copy the newly created lib file for minification.
guard 'process', :name => 'Copy to min', :command => 'cp lib/sketch.js lib/sketch.min.js' do
  watch %r{lib/sketch\.js}
end

# Use UglifyJS to minify the JavaScript for maximum smallness
guard 'uglify', :destination_file => 'lib/sketch.min.js' do
  watch(%r{lib/sketch\.min\.js})
end

This made my workflow instantaneous: I could immediately look at my work whether it was in my examples, my tests, or my documentation. Everything was built on the spot and I never had to slow myself down with run-and-refresh cycles.

Guarding a Node.js Project

I've probably only scratched the surface here, but a simple Node.js project that I'm currently working on has this for a Guardfile:

guard 'coffeescript', :input => 'src', :output => '.', :bare => true

guard 'process', :name => 'NPM', :command => 'npm install' do
  watch %r{package\.json}
end

Notice that using guard-process I'm automatically installing new dependencies that may arise when the package.json file is altered.

Guarding a Static Website

I've come to really appreciate both CoffeeScript and Sass as worthwhile abstractions, so even if I'm building something that's vanilla HTML I might have a Guardfile like this:

guard 'sass', :input => 'sass', :output => 'css'
guard 'coffeescript', :input => 'coffeescripts', :output => 'javascripts'

These are all basic examples, but that (to me) is the point: Guard is so simple and lightweight that you can drop it into every project you build. I've yet to run into something that I don't want to use Guard on.

Tip of the Iceberg

I've been expanding my usage of Guard into, well, everything that I'm working on. Thus far that's included Ruby, JavaScript, and static HTML projects, but if I move on to other things Guard will be coming with me. For instance, I'd love to build a Guard to automatically recompile and run an Android application whenever the XML views change. The possibilities are limitless.

If you're not using Guard, give it a try on one of your current projects. I think you'll quickly find immense satisfaction in being able to simply cd into the project directory, run guard, and know that you are completely ready to roll. I'd like to see a Guardfile in every open source project I fork, every client project I clone... Guard is so useful that I simply want to be using it all the time. And that is the mark of a great tool.


A lot has been made in the talkosphere recently about the brewing "multi-Ruby version manager" war, namely RVM vs. newcomer rbenv. I'm not here to discuss the relative merits of either solution, mostly because I keep things pretty simple and straightforward in the command-line world and I've never run into problems with RVM. What I do think this little fracas displays, though, is a common thread in the Ruby community of having big, blown-up controversies when new things come along. In some ways, I think that such drama is one of the unique features of the Ruby community that make it so vibrant. It's also a feature of the community that can lead to community casualties.

RVM vs. rbenv, Test::Unit vs. RSpec, HAML vs. ERB, Rails vs. Merb, CoffeeScript vs. JavaScript, Mongrel vs. Thin vs. Passenger vs. Unicorn, Cucumber vs. Steak, and the list goes on. It seems like the Ruby community has a habit of drawing battle lines every month or so. Why do these "fights" come up so frequently in our community? More importantly, what do they mean for the overall health of the community?

Today we're launching a little site called RubyThankful. It's barebones at the moment and open source, but what it represents is hopefully a way to find some positivity in the Ruby community.

Background: Passionate Programmers

I would argue that controversy breaks out on a regular basis in the Ruby community because, more than any other community in which I've participated, Rubyists are singularly driven to use not just good-enough tools but ideal tools. Ruby is a community of chaotic reinvention, a community that will jump off a cliff just to try out a new brand of parachute. It's that passion that draws me to the community, that makes me feel like the things that I do matter. It's also that passion that can cut to the bone.

People are inevitably going to form opinions about what they think is the best in a field of competing libraries/tools/products. This competition in the commercial marketplace is what drives high quality and low prices, and in the open source world it's what drives reinvention and continual progress. If a library isn't pushing its users forward, those users can and will seek out a different library that better meets their needs. This is natural and generally beneficial.

What's maybe not so beneficial is the "what have you done for me lately" attitude that can come with our pursuit of the perfect development process. It's altogether too easy to write about reasons why "Y is better than X" while forgetting that, before Y, X was so much better than nothing at all.

Casualties of Harsh Reality

As I began to write this post I saw Steve Klabnik's We Forget That Open Source is Made of People. It'll be hard not to reiterate many of his well-reasoned thoughts here, so I want to give him credit for making a point that needs to be made. I was also amazed to run into this controversy just one day after I wrote a post that included the sentence "Harsh words can sometimes be enough to completely dissolve the creator's interest in continuing the project." We've lost amazing members of this community because rather than respecting their contributions we tear them down when something marginally better comes along. This is the dark side of passion.

I like some tools better than others. I've even written blog posts debating the merits of one approach over another and declaring one superior for my purposes. I've been guilty of jumping onto new technologies and giving nary a thought to the old way of doing things. I don't think it's possible to stop this community from being obsessed with the new and different, and I don't think that's what needs to happen. What needs to happen is that our community needs to get better at raising our voice in something other than protest. We need to temper our enthusiasm for the new enough to at least be civil to the hard-working people who created the tools we used until oh-so-recently.

I'm as guilty as anyone of this. Apart from trying to say "thanks" in person to the creators and maintainers of the tools I use every day when I see them at conferences, I don't take much time to thank people for the amazing things they've done to make this community what it is. This all sounds sappy and somewhat inefficient, but I think it's a vital piece of maintaining a healthy community.

Ruby Thankful

I almost wrote this just as a blog post to say "hey, let's be more positive and thankful." I was just about to post it when I realized I could do at least a little bit more than that. So I built an almost-nothing-to-it site that can serve as a public forum for the Ruby community's gratitude for those who work hard to make it what it is. Just tweet something with the #rubythankful hashtag and it'll get picked up. Maybe it's someone you're thanking for a library, or their blog post or tutorial that helped you out, maybe it's something else. If you're thankful for the Ruby community and the members of it, let's put some voice out there!

This community has given me a lot in the last four years, and I've done my best to give back. But I haven't always been thankful enough to the individuals who create the things that I use every day. Hopefully RubyThankful is a small way to encourage that to change.
