rylwin's blog

coding for results

Simulate Timeouts for Testing Devise/Warden With Capybara

I found myself wanting to test some custom behavior that runs when the user’s session times out. Instead of sleeping for User.timeout_in, here’s the solution we are using to simulate a user timeout:

Warden.on_next_request do |proxy|
  session = proxy.env['rack.session']['warden.user.user.session']
  session['last_request_at'] -= User.timeout_in
end
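The same idea can be packaged as a plain helper for reuse across specs. This is just a sketch — the helper name is mine, not from the post — and it assumes the session value supports subtracting seconds, which both the Time and Integer forms Devise stores do:

```ruby
# Hypothetical helper (name invented): rewinds the timestamp that Devise's
# Timeoutable module keeps in the session, so the next request appears to
# have timed out. Returns the mutated session hash.
def rewind_last_request(session, timeout_in)
  session['last_request_at'] -= timeout_in
  session
end
```

You would call this on the session hash inside the Warden.on_next_request block shown above.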

Think About Your Architecture

TL;DR

Don’t just throw all your code in app/models; don’t just build a Rails app. Have another directory that contains all of your domain-specific logic. Put some thought into how you organize the files in that directory. And there’s a rule to follow: files in this directory must not invoke any Rails-provided methods.

Write an application for your domain, not a Rails app

Don’t get me wrong. Rails is fantastic—I love it! But my past Rails projects all eventually suffer from poor architecture and slow tests. It’s not Rails’ fault, it’s mine. I’ve been leaning too heavily on Rails this whole time. Now I’m finally learning how to build great apps that I enjoy working on long beyond the prototype stage.

The key to building an application with Rails that stands the test of time is to not use Rails as the defining feature of the application. Rails is just a delivery mechanism and a persistence layer. The domain logic should be separate from Rails. (Thanks to Bob Martin for motivating me to confront this issue.)

At the same time, Rails provides a lot of default goodness out of the box. I do not want a lot of extra code that re-invents anything already provided by Rails. The goal is to achieve an improved architecture without sacrificing the ease provided by Rails.

I’ve split this post into two parts (ignoring this intro). The first is a high-level explanation of the architecture that I’m proposing. In the second section, I’ll show some of the code I’m using to get this all to work.

The Architecture (from 20k ft)

The short version: app/models contains my ActiveRecord models and app/lib contains application / domain-specific logic.

app/models

The classes in app/models should be ActiveRecord classes or classes that extensively use the ActiveRecord API (e.g., query-helper classes). Additionally, code in app/models should know nothing of our domain logic. The classes should be focused only on data retrieval and storage. One way to think of app/models classes is as facades to the ActiveRecord API.

I use ActiveRecord models throughout the application, but I don’t write any code outside of app/models that directly uses an ActiveRecord method. This means that, in code outside of app/models, I only invoke standard field getters/setters and methods that I’ve defined on the models.

This approach is a balancing act. For example, I let scaffolded controllers invoke ActiveRecord methods—I’m not going to spend time updating the default controllers. This goes to the point of not creating extra work / reinventing the wheel. After all, one could have PORO classes to represent each ActiveRecord model to the rest of the application, but achieving this would require a decent amount of work for what may be an almost entirely academic benefit.

app/lib

The app/lib directory contains the business/domain logic. The code in app/lib must have no knowledge of Rails—none at all. app/lib classes (all POROs) can still use the model classes, but they are not allowed to use any ActiveRecord API methods (except the field getters/setters). This means direct invocation of save, create, update_attributes, all querying, etc. is off limits.

By separating the application logic into a separate directory, I’ve found it much easier to make good architectural decisions. A nice benefit of this approach is that it makes it very easy to write fast_specs, since I know that code in app/lib does not have dependencies on Rails or the database. All classes in app/lib have their specs located in a fast_spec directory.

The classes within app/lib are organized into modules. The modules you define will vary depending on your domain, but I’ll share a few of mine from a project I’m working on:

Integration – interaction with remote data sources

Integration classes are responsible for interacting with remote data sources. These classes have no dependencies on our other classes. The external sources may be remote APIs, uploaded data files, etc.
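For instance (a sketch — the class and the contacts domain are invented for illustration), an integration class for an uploaded CSV data file might look like this, with no dependency on Rails or on any model:

```ruby
require 'csv'

module Integration
  # Hypothetical integration class: parses an uploaded CSV of contacts
  # into plain hashes. Pure Ruby, so it can be covered by fast specs.
  class ContactsUpload
    def initialize(csv_text)
      @csv_text = csv_text
    end

    # Returns an array of { name:, email: } hashes, one per CSV row.
    def contacts
      CSV.parse(@csv_text, headers: true).map do |row|
        { name: row['name'], email: row['email'] }
      end
    end
  end
end
```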

Render – renders documents in various formats

Classes in the render module are responsible for rendering non-HTML/JSON views (e.g., render a PDF, XLS, etc). It is often useful to be able to generate these files outside of the standard request/response cycle and it’s much easier to test the result when it’s just plain Ruby.

Service – encapsulates user story logic

Service classes encapsulate the logic of our user stories (or parts of stories). The service classes decrease complexity and coupling in the system by their organization and by providing interaction between the persistence layer (app/models) and the domain logic layer (app/lib). Service classes may invoke other service classes to build more complex behaviors.
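As a sketch of the shape these classes take (the invoice domain and every name here are invented for illustration), a service might look like:

```ruby
module Service
  # Hypothetical service: closes every overdue invoice. It touches the
  # persistence layer only through methods defined on the model itself
  # (overdue, close!), never through raw ActiveRecord calls.
  class CloseOverdueInvoices
    def initialize(invoice_source)
      @invoice_source = invoice_source
    end

    # Closes each overdue invoice and returns the collection.
    def call
      @invoice_source.overdue.each(&:close!)
    end
  end
end
```

In production code, invoice_source would be the Invoice model; in a fast spec it can be any stub that responds to overdue.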

Util – code not relevant to our domain

Any code unrelated to our specific domain gets placed here (e.g., I have a class that converts XLSB files to XLS and another that compares hashes). Anything you throw in here might be a good candidate for a gem!
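A hash-comparison utility like the one mentioned could be as small as this (a sketch; the class and method names are mine):

```ruby
module Util
  # Hypothetical utility: lists the keys whose values differ between two
  # hashes. Nothing here knows anything about the application's domain.
  class HashDiff
    def self.diff(a, b)
      (a.keys | b.keys).reject { |key| a[key] == b[key] }
    end
  end
end
```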

This sounds awesome. How can I do it? (i.e., the code)

Setting up app/lib

Add the following to config/application.rb:

config.autoload_paths += %W(#{config.root}/app/lib)

That’s all you need to do in order to begin adding code into app/lib! Here’s what my app/lib looks like:

integration/    render/     service/    util/
integration.rb  render.rb   service.rb  util.rb

Each folder (module) has a simple file with the same name that creates the module. E.g.,:

service.rb
module Service; end

Setting up fast_spec

For the corresponding fast_specs, add a fast_spec directory at the top level of your project. Then add a file fast_spec/spec_fast_helper.rb:

fast_spec/spec_fast_helper.rb
require 'bundler'
# require 'other gems from your bundle'

root = File.join(File.dirname(__FILE__), '..')

# Set up any configuration here...
# I18n.load_path = [File.join(root, 'config', 'locales', 'en.yml')]

# Require your app/lib folder. You may need to require some classes first
# depending on how you've set up your class hierarchy.
Dir[File.join(root, 'app', 'lib', '**', '*.rb')].sort.each do |lib|
  require lib
end

I also added a rake task so that I can rake fast:

lib/tasks/000_fast.rake
require 'rspec/core/rake_task'

desc "Run fast specs"
RSpec::Core::RakeTask.new(:fast) do |task|
  task.pattern = 'fast_spec/**/*_spec.rb'
  task.rspec_opts = '-Ifast_spec'
end

task :default => :fast

Now in each fast_spec file just require 'spec_fast_helper'. If you like guard, check out guard-fast_spec. And if you haven’t been using guard because your tests have been too slow, try it again. When your tests are this fast you’ll love it.

When I need to include ActiveRecord models in the tests, I just use stubs. The danger here is that your specs become out of sync with your models, but a decent integration test suite should catch these issues.
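Concretely, a fast spec can hand a PORO a stand-in object instead of a real model. A sketch, with invented names:

```ruby
require 'ostruct'

# Hypothetical PORO under test: it only calls plain getters on whatever
# user-like object it is given, so it has no idea Rails exists.
class UserLabel
  def initialize(user)
    @user = user
  end

  def to_s
    "#{@user.name} <#{@user.email}>"
  end
end

# In a fast spec, an OpenStruct stands in for the ActiveRecord User model;
# no Rails, no database.
stub_user = OpenStruct.new(name: 'Bob', email: 'bob@example.com')
UserLabel.new(stub_user).to_s  # => "Bob <bob@example.com>"
```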

Since all the complicated logic lives in app/lib and is covered by fast_spec, the specs in spec are all simple. Really simple. Which means that those specs run decently fast:

$ time rake spec
# output truncated
...............................................................................
...............................................................................
...............................................................................
...............................................................................
..................................

Finished in 6.82 seconds
350 examples, 0 failures

rake spec  19.44s user 1.62s system 93% cpu 22.624 total

And the fast_specs?

$ time rake fast
# output truncated
...............................................................................
...............................................................................
........

Finished in 1.09 seconds
166 examples, 0 failures

rake fast  7.91s user 0.77s system 98% cpu 8.798 total

It’s still a young app so there aren’t a ton of specs, but I think you get the idea.

Minimal changes, big results

This simple organizational change (along with the associated rules) makes it easier for me to employ good OOP design principles in my projects that use Rails, yet I’m still able to reap the benefits of Rails (which are plenty!).

Let me know if you like this idea. If there’s interest, I’ll write some follow-up posts with code samples that demonstrate how I’m using this app/lib organization to write cleaner, more modular code.

Stop Writing JS for Nested Attributes

I’ve been re-writing/copying the same JS to deal with adding/removing nested attributes for far too long. On a project I recently started, I was fortunate enough to stumble across patbenatar/jquery-nested_attributes. This jquery plugin makes handling nested objects a cinch.

The simplest usage looks like this:

$("#container").nestedAttributes(
  bindAddTo: $("#add_another")
)

In the above example, #container refers to a DOM element whose immediate descendants are considered to be sets of nested attributes. When the user clicks the link referenced by #add_another the plugin automatically clones a set of fields and appends to the DOM.

In case you need a bit more flexibility, there are a slew of options available:

{
  collectionName: false,         // If not provided, we will attempt to autodetect. Provide this for complex collection names
  bindAddTo: false,              // Required unless you are implementing your own add handler (see API below). The single DOM element that when clicked will add another set of fields
  removeOnLoadIf: false,         // Function. It will be called for each existing item, return true to remove that item
  collectIdAttributes: true,     // Attempt to collect Rails' ID attributes
  beforeAdd: false,              // Function. Callback before adding an item
  afterAdd: false,               // Function. Callback after adding an item
  beforeMove: false,             // Function. Callback before updating indexes on an item
  afterMove: false,              // Function. Callback after updating indexes on an item
  beforeDestroy: false,          // Function. Callback before destroying an item
  afterDestroy: false,           // Function. Callback after destroying an item
  destroySelector: '.destroy',   // Pass in a custom selector of an element in each item that will destroy that item when clicked
  deepClone: true,               // Do you want jQuery to deep clone the element? Deep clones preserve events. Undesirable when using BackBone views for each element.
  $clone: null                   // Pass in a clean element to be used when adding new items. Useful when using plugins like jQuery UI Datepicker or Select2. Use in conjunction with `afterAdd`.
}

In my case, I am using select2 to provide some fancy select elements. The only problem was select2 wasn’t playing nice—I couldn’t unbind select2 from the select element and as a result pressing the “add another” link made the form unusable. After a quick iteration with the plugin’s author on github, I learned about the $clone option. You can use $clone to pass in a “clean” element that will get appended to the DOM when the “add another” link is pressed.

To take advantage of the $clone option you just have to get a copy of your DOM element before binding any other JS to it:

clone = $('.nested-object-fields:first').clone()

$(".container").nestedAttributes(
  bindAddTo: $(".add-another")
  $clone: clone
  afterAdd: (el) ->
  ...
)

Deleting items is just as easy. I got tripped up by having a hidden field for _destroy. You don’t need it! You just need to have an element with "destroy" as the class. When this element is clicked the plugin automatically adds the _destroy hidden field for you.

I can’t imagine an easier way to manage nested attributes. Thanks, @patbenatar, for an awesome plugin.

The Perfect Remote Pair Environment

I recently started working with a remote programmer and quickly realized we needed an effective way to pair remotely. Using SSH, vim, and tmux (along with some other nifty tools), I was able to set up a powerful pair environment in minutes. Here are the steps:

1. Have a box you can SSH into

I like using Linode (full disclosure: that link has a referral code), but any *nix box you can SSH into will do. All following instructions are to be performed on your remote box, unless otherwise specified.

2. Create a new user account

The first thing I did was add a new user called ‘pair’ to a Linode I use for development.

sudo adduser pair

I’ve seen ways to share tmux sessions between two user accounts, but then I cannot let my pair wrap up work on my tmux session without watching them (I have trust issues). But using an account just for pairing means:

  • no worries about pair partner messing with my personal files
  • it’s easy to have config files tweaked for pairing
  • I can let pair partners log in via their public key (they don’t need the password and I can remove their key at any time)

3. Configure tmux / vim

For those who aren’t aware, tmux is a terminal multiplexer—it lets you easily switch between programs in one terminal. It’s a fantastic piece of software and if you haven’t used it before I highly recommend you take a look.

After setting up the user I copied over my vimrc and tmux configurations from my dotfiles repo. One issue is that users (like myself) prefer their own configs. The nice thing about having a separate pair account is that it’s easier to compromise on config changes. After all, it’s not a change to my vimrc, it’s our shared vimrc.

My pair partner is a big fan of the jk smash to switch back to normal mode. When we were pairing, his instincts would take over and he would add ‘jk’ to the end of lines of code. I told him just to add it in to the vimrc, and that was that.

Note: Screen would also work fine instead of tmux. As would emacs or any other terminal editor in place of vim. But tmux/vim is my preference.

4. SSH Keys

No need to give out the password to the pair account. Just ask your pair for their SSH key and copy it over yourself. Paste it into ~/.ssh/authorized_keys within your pair account. If you ever want to revoke their access, just remove their key (note that this won’t close an existing session).

5. Local SSH Config / Port Forwarding

I mostly work on Rails apps. Though we are often just looking at code and running tests while in the pair environment, it’s sometimes helpful to interact with the app in the browser to diagnose issues and check behavior. With port forwarding, we can run the dev Rails server in our pair environment and access it using our browsers locally.

The ssh command to log in with forwarding looks like:

ssh pair@server.url -L 3000:127.0.0.1:3000

With this port forwarding, I can open my browser and go to localhost:3000 to access the dev Rails server running on the remote box. That’s quite a bit to type out every time, so I added an alias to my .ssh/config:

Host pair
  HostName server.url
  User pair
  LocalForward 3000 127.0.0.1:3000

With the alias I can log in with just ssh pair.

6. Git Pairing

When we pair it’s nice to have the git log reflect that we worked together on a piece of code. Fortunately, the git-pairing gem makes this easy. It allows you to pre-define several pair partners who you can identify by their initials. When you want to commit as a pair, just type:

git pair initials1 initials2

And when you’re working solo:

git solo

Conclusion

The pair environment + a phone call / skype makes working remotely incredibly easy. I’ve found this setup to be just as easy as working side-by-side in person. Perhaps even better, because when we’re done pairing I can logout of the pair server and get right back to working locally.

Sending Emails via Gmail as “Another Person”

Recently, I needed to find a way to send emails from a web-based application so that it seemed, to the recipient, as if the user initiating the action was doing the sending (as opposed to the application). Gmail, however, prevents spoofing of emails: If you set the “from” field to an email address that differs from the actual email address of the account you are using to send, Gmail automatically resets the “from” field to the account email address.

What you can do, however, is tell Gmail what the name should be: “Bob Smith” <app-email-account@example.com>. Then you can also set the reply-to to the user’s actual email address: bob.smith@example.com. Now when an email sent with a from/reply-to as described above arrives in my inbox, I see that I’ve received an email from Bob Smith (from the account app-email-account@example.com, but only if I look at the details). When I hit reply, the “to” field of my email is set to bob.smith@example.com.

To clarify, the from/reply-to fields should look as follows:

From:     "Bob Smith" <app-email-account@example.com>
Reply-To: bob.smith@example.com
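A small helper can build this pair of headers. This is a sketch with invented names; in an ActionMailer mailer the resulting values would typically be passed as the from: and reply_to: options to mail:

```ruby
# Hypothetical helper: builds the Gmail-safe header pair described above.
# The app's account address stays in From (so Gmail won't rewrite it),
# while the real user's address goes in Reply-To.
def impersonation_headers(user_name, user_email, account_email)
  {
    from:     %("#{user_name}" <#{account_email}>),
    reply_to: user_email
  }
end
```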

Skip Reloading Fixtures Each Time You Run Your Rails Tests

Everyone wants their tests to run faster. I want my tests to run faster. That’s why I’ve been using parallel when I need to run my whole test suite (or at least a large part of it, e.g., unit tests). And VCR makes it really easy to record HTTP interactions to play back later, which means your tests will run faster and will pass even when you’re not online.

A lot of the time I just want to run a single test file (or single test method)—and this, for me, takes an annoyingly long time. If you have a lot of fixtures like I do, then you may want to read on. Let’s start with the test benchmarks.

% time ruby test/unit/letter_test.rb
Loaded suite test/unit/letter_test
Started
..........
Finished in 7.705702 seconds.

10 tests, 10 assertions, 0 failures, 0 errors
10.80s user 1.90s system 15.393 total

Fifteen whole seconds to wait for a simple set of unit tests to run! And most of that time is spent just preparing to run the tests. My first step was to use spork, which speeds up testing by preloading the Rails env and forking when you want to run your tests.

% testdrb test/unit/letter_test.rb
Loaded suite letter_test.rb
Started
..........
Finished in 8.642509 seconds.

10 tests, 10 assertions, 0 failures, 0 errors
0.12s user 0.04s system 9.052 total

Quite an improvement! Spork shaved off about 6s from the total time to run the test just by preloading the Rails env.

But 9s is still a long time to run a few simple tests. Digging through test.log I realized an absurd amount of time was being spent loading in fixtures.* The worst part is that there is really no need to reload the fixtures into the DB once they are in there as long as you are using transactional fixtures—I only need Rails to know which fixtures are present so I can easily access the AR objects I need by the names they have in the fixtures. This part doesn’t take long at all. Most of the time loading in fixtures is spent putting them in the DB.

After some digging through ActiveSupport::TestCase and finding my way into ActiveRecord::Fixtures, I realized that if I could stub out the database adapter methods that are used to load in the fixtures then I could get the benefit of having Rails know which fixtures exist without actually spending the time to reload them into the database. Here’s how I modified my test/test_helper.rb to achieve this using mocha (only relevant code shown):

ENV["RAILS_ENV"] = "test"
require File.expand_path('../../config/environment', __FILE__)
require 'rails/test_help'
require 'mocha'

unless (ENV["SKIP_FIXTURES"]||ENV["SF"]).nil?
  # Make sure to stub whatever adapter corresponds to your test db
  ActiveRecord::ConnectionAdapters::Mysql2Adapter.any_instance.stubs(:delete).returns(true)
  ActiveRecord::ConnectionAdapters::Mysql2Adapter.any_instance.stubs(:insert_fixture).returns(true)
end

class ActiveSupport::TestCase
  fixtures :all
  def setup
    ActiveRecord::ConnectionAdapters::Mysql2Adapter.any_instance.unstub(:delete)
    ActiveRecord::ConnectionAdapters::Mysql2Adapter.any_instance.unstub(:insert_fixture)
  end
  # other stuff...
end

Now when I run my test with the DB methods stubbed out:

% time SF=true ruby test/unit/letter_test.rb
Loaded suite test/unit/letter_test
Started
..........
Finished in 2.664011 seconds.

10 tests, 10 assertions, 0 failures, 0 errors
7.96s user 1.70s system 10.562 total

And with spork (must start the spork server w/ SF=true):

% time testdrb test/unit/letter_test.rb
Loaded suite letter_test.rb
Started
..........
Finished in 3.351071 seconds.

10 tests, 10 assertions, 0 failures, 0 errors
0.14s user 0.01s system 3.574 total

So by using spork and skipping loading fixtures every time I was able to go from 15s to ~3.5s. I can live with that. I can be productive with that.

* This app was started a long, long time ago and migrating to factories would be way too painful.