I found myself wanting to test some custom behavior that runs when the user’s session times out. Instead of sleep User.timeout_in, here’s the solution we are using to simulate a user timeout:
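The original snippet was lost in formatting. The idea is to jump the clock past the timeout window instead of sleeping through it; here is a minimal sketch using Timecop (the helper name and spec usage are illustrative, not the original code, and it assumes Devise’s :timeoutable, which is where User.timeout_in comes from):

```ruby
# spec helper -- a sketch, not the original snippet
def simulate_session_timeout
  # Jump past the timeout window instead of sleeping through it.
  # Remember to Timecop.return in an after hook (or use the block form).
  Timecop.travel(Time.now + User.timeout_in + 1.minute)
end

# in a request/feature spec:
#   sign_in user
#   simulate_session_timeout
#   visit dashboard_path   # the custom timeout behavior should kick in here
```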
Don’t just throw all your code in app/models; don’t just build a Rails app.
Have another directory that contains all of your domain-specific logic. Put some
thought into how you organize the files in that directory. And there’s a rule to
follow: files in this directory must not invoke any Rails-provided methods.
Don’t get me wrong. Rails is fantastic—I love it! But my past Rails projects all eventually suffer from poor architecture and slow tests. It’s not Rails’ fault, it’s mine. I’ve been leaning too heavily on Rails this whole time. Now I’m finally learning how to build great apps that I enjoy working on long beyond the prototype stage.
The key to building an application with Rails that stands the test of time is to not use Rails as the defining feature of the application. Rails is just a delivery mechanism and a persistence layer. The domain logic should be separate from Rails. (Thanks to Bob Martin for motivating me to confront this issue.)
At the same time, Rails provides a lot of default goodness out of the box. I do not want a lot of extra code that re-invents anything already provided by Rails. The goal is to achieve an improved architecture without sacrificing the ease provided by Rails.
I’ve split this post into two parts (ignoring this intro). The first is a high-level explanation of the architecture that I’m proposing. In the second section, I’ll show some of the code I’m using to get this all to work.
The short version: app/models contains my ActiveRecord models and app/lib contains application / domain-specific logic.
The classes in app/models should be ActiveRecord classes or classes that extensively use the ActiveRecord API (e.g., query-helper classes). Additionally, code in app/models should know nothing of our domain logic. The classes should be focused only on data retrieval and storage. One way to think of app/models classes is as facades to the ActiveRecord API.
I use ActiveRecord models throughout the application, but I don’t write any code outside of app/models that directly uses an ActiveRecord method. This means that, in code outside of app/models, I only invoke standard field getters/setters and methods that I’ve defined on the models.
This approach is a balancing act. For example, I let scaffolded controllers invoke ActiveRecord methods—I’m not going to spend time updating the default controllers. This goes to the point of not creating extra work / reinventing the wheel. After all, one could have PORO classes to represent each ActiveRecord model to the rest of the application, but achieving this would require a decent amount of work for what may be an almost entirely academic benefit.
The app/lib directory contains the business/domain logic. The code in app/lib must have no knowledge of Rails—none at all. app/lib classes (all POROs) can still use the model classes, but they are not allowed to use any ActiveRecord API methods (except the field getters/setters). This means direct invocation of save, create, update_attributes, all querying, etc. is off limits.
By separating the application logic into its own directory, I’ve found it much easier to make good architectural decisions. A nice benefit of this approach is that it makes it very easy to write fast_specs, since I know that code in app/lib does not have dependencies on Rails or the database. All classes in app/lib have their specs located in a fast_spec directory.
The classes within app/lib are organized into modules. The modules you define will vary depending on your domain, but I’ll share a few of mine from a project I’m working on:
Integration classes are responsible for interacting with remote data sources. These classes have no dependencies on our other classes. The external sources may be remote APIs, uploaded data files, etc.
Classes in the render module are responsible for rendering non-HTML/JSON views (e.g., render a PDF, XLS, etc). It is often useful to be able to generate these files outside of the standard request/response cycle and it’s much easier to test the result when it’s just plain Ruby.
Service classes encapsulate the logic of our user stories (or parts of stories). The service classes decrease complexity and coupling in the system by their organization and by providing interaction between the persistence layer (app/models) and the domain logic layer (app/lib). Service classes may invoke other service classes to build more complex behaviors.
A final, miscellaneous module holds any code unrelated to our specific domain (e.g., I have a class that converts XLSB files to XLS and another that compares hashes). Anything you throw in here might be a good candidate for a gem!
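To make the service idea concrete, here is a hypothetical example (the class, method, and model names are mine, not from the project): a PORO that may call methods defined on a model, but never the raw ActiveRecord API.

```ruby
# app/lib/service/archive_stale_accounts.rb -- hypothetical example
module Service
  class ArchiveStaleAccounts
    def initialize(accounts)
      @accounts = accounts   # models are handed in; no querying inside app/lib
    end

    def run
      # archive! is a method defined on the model, not a raw ActiveRecord call
      @accounts.each(&:archive!)
    end
  end
end
```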
app/lib
Add the following to config/application.rb:
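The original one-liner was lost in formatting; a common way to do it (this exact line is an assumption) is to add app/lib to the autoload paths:

```ruby
# config/application.rb, inside class Application < Rails::Application -- a sketch
config.autoload_paths += %W(#{config.root}/app/lib)
```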
That’s all you need to do in order to begin adding code into app/lib! Here’s what my app/lib looks like:
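The listing itself was lost as well; based on the modules described above, it looked something like this (only the modules mentioned in this post are shown):

```
$ ls app/lib
integration/  integration.rb  render/  render.rb  service/  service.rb  ...
```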
Each folder (module) has a simple file with the same name that creates the module. E.g.,:
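Something like this (reconstructed; the original was a single line, and the path assumes the autoload setup sketched above):

```ruby
# app/lib/service.rb
module Service; end
```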
fast_spec
For the corresponding fast_specs, add a fast_spec directory at the top level of your project. Then add a file fast_spec/spec_fast_helper.rb:
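The original helper was lost in formatting. The essential property is that it loads RSpec and makes app/lib requirable without touching Rails or the database; a sketch (paths and configuration are assumptions):

```ruby
# fast_spec/spec_fast_helper.rb -- a sketch, not the original helper
require 'rspec'

# Put app/lib on the load path so the POROs can be required directly,
# without the Rails autoloader or the database.
$LOAD_PATH.unshift File.expand_path('../../app/lib', __FILE__)

RSpec.configure do |config|
  config.mock_with :rspec
end
```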
I also added a rake task so that I can rake fast:
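The original task was lost; a sketch using RSpec’s built-in rake task (the pattern is an assumption):

```ruby
# lib/tasks/fast.rake -- a sketch
require 'rspec/core/rake_task'

desc 'Run the fast (Rails-free) specs'
RSpec::Core::RakeTask.new(:fast) do |t|
  t.pattern = 'fast_spec/**/*_spec.rb'
end
```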
Now in each fast_spec file just require 'spec_fast_helper'. If you like guard, check out guard-fast_spec. And if you haven’t been using guard because your tests have been too slow, try it again. When your tests are this fast you’ll love it.
When I need to include ActiveRecord models in the tests, I just use stubs. The danger here is that your specs become out of sync with your models, but a decent integration test suite should catch these issues.
Since all the complicated logic is covered by the specs in fast_spec, the specs in spec are all simple. Really simple. Which means that those specs run decently fast:
[spec timing output not preserved]
And the fast_specs?
[spec timing output not preserved]
It’s still a young app so there aren’t a ton of specs, but I think you get the idea.
This simple organizational change (along with the associated rules) makes it easier for me to employ good OOP design principles in my projects that use Rails, yet I’m still able to reap the benefits of Rails (and there are plenty!).
Let me know if you like this idea. If there’s interest, I’ll write some follow-up posts with code samples that demonstrate how I’m using this app/lib organization to write cleaner, more modular code.
I’ve been re-writing/copying the same JS to deal with adding/removing nested attributes for far too long. On a project I recently started, I was fortunate enough to stumble across patbenatar/jquery-nested_attributes. This jQuery plugin makes handling nested objects a cinch.
The simplest usage looks like this:
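The original example didn’t survive the formatting; it was roughly along these lines (the option name here is an assumption, so check the plugin’s README):

```javascript
// a sketch of the basic call; the option name is an assumption
$('#container').nestedAttributes({
  bindAddTo: $('#add_another')
});
```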
In the above example, #container refers to a DOM element whose immediate descendants are considered to be sets of nested attributes. When the user clicks the link referenced by #add_another, the plugin automatically clones a set of fields and appends it to the DOM.
In case you need a bit more flexibility, there are a slew of options available; see the plugin’s README for the full list.
In my case, I am using select2 to provide some fancy select elements. The only problem was select2 wasn’t playing nice—I couldn’t unbind select2 from the select element, and as a result pressing the “add another” link made the form unusable. After a quick iteration with the plugin’s author on GitHub, I learned about the $clone option. You can use $clone to pass in a “clean” element that will get appended to the DOM when the “add another” link is pressed.
To take advantage of the $clone option you just have to get a copy of your DOM element before binding any other JS to it:
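The original example was lost in formatting; here is a sketch of the idea ($clone is the plugin option described above, while the selectors and the bindAddTo option name are assumptions):

```javascript
// Grab a pristine copy of a field set before select2 (or anything else) binds to it.
var $cleanFields = $('#container .fields:first').clone();

// Now it is safe to enhance the live fields.
$('#container .fields select').select2();

$('#container').nestedAttributes({
  bindAddTo: $('#add_another'),
  $clone: $cleanFields
});
```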
Deleting items is just as easy. I got tripped up by having a hidden field for _destroy. You don’t need it! You just need to have an element with "destroy" as the class. When this element is clicked, the plugin automatically adds the _destroy hidden field for you.
I can’t imagine an easier way to manage nested attributes. Thanks, @patbenatar, for an awesome plugin.
I recently started working with a remote programmer and quickly realized we needed an effective way to pair remotely. Using ssh, vim, and tmux (along with some other nifty tools) I was able to set up a powerful pair environment in minutes. Here are the steps:
I like using Linode (full disclosure: that link has a referral code), but any *nix box you can SSH into will do. All following instructions are to be performed on your remote box, unless otherwise specified.
The first thing I did was add a new user called ‘pair’ to a Linode I use for development.
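The command was a one-liner along these lines (run on the remote box as root or via sudo):

```
$ sudo adduser pair
```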
I’ve seen ways to share tmux sessions between two user accounts, but then I cannot let my pair wrap up work on my tmux session without watching them (I have trust issues). Using an account just for pairing has several benefits, some of which I touch on below.
For those who aren’t aware, tmux is a terminal multiplexer—it lets you easily switch between programs in one terminal. It’s a fantastic piece of software and if you haven’t used it before I highly recommend you take a look.
After setting up the user I copied over my vimrc and tmux configurations from my dotfiles repo. One issue is that users (like myself) prefer their own configs. The nice thing about having a separate pair account is that it’s easier to compromise on config changes. After all, it’s not a change to my vimrc, it’s our shared vimrc.
My pair partner is a big fan of the jk smash to switch back to normal mode. When we were pairing, his instincts would take over and he would add ‘jk’ to the end of lines of code. I told him just to add it in to the vimrc, and that was that.
Note: Screen would also work fine instead of tmux. As would emacs or any other terminal editor in place of vim. But tmux/vim is my preference.
No need to give out the password to the pair account. Just ask your pair for their SSH key and copy it over yourself. Paste it into ~/.ssh/authorized_keys within your pair account. If you ever want to revoke their access, just remove their key (note that this won’t close an existing session).
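For example (paths are illustrative; run this on the remote box as root or as the pair user):

```
$ cat /tmp/their_key.pub >> /home/pair/.ssh/authorized_keys
```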
I mostly work on Rails apps. Though we are often just looking at code and running tests while in the pair environment, it’s sometimes helpful to interact with the app in the browser to diagnose issues and check behavior. With port forwarding, we can run the dev Rails server in our pair environment and access it using our browsers locally.
The ssh command to log in with forwarding looks like:
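The original command was lost; it was something along these lines (the hostname is a placeholder):

```
$ ssh -L 3000:localhost:3000 pair@your-remote-box
```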
With this port forwarding, I can open my browser and go to localhost:3000 to access the dev Rails server running on the remote box. That’s quite a bit to type out every time, so I added an alias to my .ssh/config:
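The original entry was lost; a sketch (HostName is a placeholder):

```
Host pair
  HostName your-remote-box
  User pair
  LocalForward 3000 localhost:3000
```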
With the alias I can log in with just ssh pair.
When we pair it’s nice to have the git log reflect that we worked together on a piece of code. Fortunately, the git-pairing gem makes this easy. It allows you to pre-define several pair partners who you can identify by their initials. When you want to commit as a pair, just type:
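The exact invocation was lost in formatting; with git-pairing it looks something like this (the initials are placeholders, and the precise syntax may differ, so check the gem’s README):

```
$ git pair <your initials> <their initials>
```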
And when you’re working solo:
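Again, roughly (the exact command is an assumption):

```
$ git pair <your initials>
```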
The pair environment + a phone call / skype makes working remotely incredibly easy. I’ve found this setup to be just as easy as working side-by-side in person. Perhaps even better, because when we’re done pairing I can logout of the pair server and get right back to working locally.
Recently, I needed to find a way to send emails from a web-based application so that it seemed, to the recipient, as if the user initiating the action was doing the sending (as opposed to the application). Gmail, however, prevents spoofing of emails: If you set the “from” field to an email address that differs from the actual email address of the account you are using to send, Gmail automatically resets the “from” field to the account email address.
What you can do, however, is tell Gmail what the name should be: “Bob Smith” <app-email-account@example.com>. Then you can also set the reply-to to the user’s actual email address: bob.smith@example.com. Now when an email sent with a from/reply-to as described above arrives in my inbox, I see that I’ve received an email from Bob Smith (from the account app-email-account@example.com, but only if I look at the details). When I hit reply, the “to” field of my email is set to bob.smith@example.com.
To clarify, the from/reply-to fields should look as follows:
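In header form, using the example addresses from above:

```
From: "Bob Smith" <app-email-account@example.com>
Reply-To: bob.smith@example.com
```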
Everyone wants their tests to run faster. I want my tests to run faster. That’s why I’ve been using parallel when I need to run my whole test suite (or at least a large part of it, e.g., unit tests). And VCR makes it really easy to record HTTP interactions to play back later, which means your tests will run faster and will pass even when you’re not online.
A lot of the time I just want to run a single test file (or single test method)—and this, for me, takes an annoyingly long time. If you have a lot of fixtures like I do, then you may want to read on. Let’s start with the test benchmarks.
[timing output not preserved; the full run took about 15 seconds]
Fifteen whole seconds to wait for a simple set of unit tests to run! And most of that time is spent just preparing to run the tests. My first step was to use spork, which speeds up testing by preloading the Rails env and forking when you want to run your tests.
[timing output not preserved; with spork, roughly 9 seconds]
Quite an improvement! Spork shaved off about 6s from the total time to run the test just by preloading the Rails env.
But 9s is still a long time to run a few simple tests. Digging through test.log I realized an absurd amount of time was being spent loading in fixtures.* The worst part is that there is really no need to reload the fixtures into the DB once they are in there, as long as you are using transactional fixtures—I only need Rails to know which fixtures are present so I can easily access the AR objects I need by the names they have in the fixtures. This part doesn’t take long at all. Most of the time loading in fixtures is spent putting them in the DB.
After some digging through ActiveSupport::TestCase and finding my way into ActiveRecord::Fixtures, I realized that if I could stub out the database adapter methods that are used to load in the fixtures then I could get the benefit of having Rails know which fixtures exist without actually spending the time to reload them into the database. Here’s how I modified my test/test_helper.rb to achieve this using mocha (only relevant code shown):
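The original code was lost in formatting, so here is a sketch of the idea only: gate the stubbing behind an environment variable, and no-op the adapter calls that the Rails 3-era fixture loader uses to wipe and refill the fixture tables. The method names may differ in other Rails versions, and the fixture rows must already be sitting in the test database from a previous, un-stubbed run.

```ruby
# test/test_helper.rb (excerpt) -- a sketch, not the original code
require 'mocha/setup'

class ActiveSupport::TestCase
  fixtures :all

  if ENV['SF'] == 'true'
    # Wrap fixture loading so Rails still builds its fixture-name => id map,
    # but the slow DELETE/INSERT round trips become no-ops.
    def load_fixtures
      connection = ActiveRecord::Base.connection
      connection.stubs(:insert_fixture)
      connection.stubs(:delete)
      fixtures = super
      connection.unstub(:insert_fixture)
      connection.unstub(:delete)
      fixtures
    end
  end
end
```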
Now when I run my test with the DB methods stubbed out:
[timing output not preserved]
And with spork (must start the spork server w/ SF=true):
[timing output not preserved; roughly 3.5 seconds]
So by using spork and skipping loading fixtures every time I was able to go from 15s to ~3.5s. I can live with that. I can be productive with that.
* This app was started a long, long time ago and migrating to factories would be way too painful.