Thursday, March 1, 2012

Gemfile.local

Hey guys,

For a while now, I've been looking for a way to add local gem dependencies to my Rails project, but Bundler doesn't allow that by default :P

As I work on a distributed team, with different operating systems and all, there are times when gems need to differ between machines. One really silly example is growl (a notification tool that is only available on Mac OS, but I use Linux, so I always have to remove that dependency manually...)

Finally, I reached this post:

http://madebynathan.com/2010/10/19/how-to-use-bundler-with-plugins-extensions/

It gave me a really simple idea to accomplish what I wanted. Basically, you can add this to the end of your Gemfile:


# Load Gemfile.local (if present) so each developer can add machine-specific gems
Dir.glob(File.join(File.dirname(__FILE__), "Gemfile.local")) do |gemfile|
  eval(IO.read(gemfile), binding)
end


Then just add Gemfile.local to your .gitignore; from there it's trivial.

You just need to add any local dependency you need to Gemfile.local.
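
For example, on my Linux box my Gemfile.local is basically empty, while a teammate on a Mac could have something like this (the group and gems here are just an illustration; use whatever your machine needs):

# Gemfile.local (git-ignored, one per developer machine)
group :development do
  gem 'growl'       # Mac-only notifications
  gem 'rb-fsevent'  # Mac filesystem events
end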

Hope it is useful for anybody out there ;)

Wednesday, June 30, 2010

Ruby Time Tracker

Hi guys,

If you are one of those people who work as freelancers, or you want to keep track of your time for some other reason, then this post may interest you.

There was a point in time when I used web sites (free or paid) to track my working time, but I got annoyed by the time lost reaching those pages and by the network issues you might have while using them... and, as you may agree, nothing is as efficient as a command line tool. So I've just written a new gem called RTT to track time from the command line. Here is how I use it all the time:

$ rtt 'new task description' (when I started the timer)

To stop (or pause) it, just type:

$ rtt stop (pause)

Then, if you have a task already paused, you can:

$ rtt resume

Or, to restart an existing task that is already finished, you can type:

$ rtt 'task description'

Then the new time will be added to the existing accumulated time for that task.

Also, I always want to check how I'm doing today, so I type this:

$ rtt list

And I get an output like this:

Task List
=========
Task: || Client: || Project: || User: || Elapsed time: 0h58m
Task: || Client: || Project: || User: || Elapsed time: 2h20m
Task: || Client: || Project: || User: || Elapsed time: 0h25m
...

The main purpose of having a time tracking tool for the command line was to save time and avoid all the network issues of tracking time on web sites. But I've also included in this gem a way to generate PDF documents (or invoices, if you will), so that it is easier for you to send your worked hours, using this command:

$ rtt report

So, if it sounds like a fit for you, you can install it as usual, with:

$ [sudo] gem install rtt

There are several more options for filtering tasks for the list, delete or report commands; you can check those here: http://github.com/marklazz/rtt

Enjoy!

Thursday, March 18, 2010

Subtracting sets with SQL

Hey guys,

Long time no see!

I wanted to mention an SQL trick that I used on my current project and that turned out to be pretty handy!

The Problem

Let's say you have a table of users and you want to write a query that retrieves the rows that meet certain criteria and don't meet others... Also, we don't want to use a NOT EXISTS clause (as it is too expensive in terms of performance). In short, you want something like this:

You_want = Some_Set_Users (I) - Another_Users_Set (II)

This can be accomplished by simply executing a couple of queries, one for (I) and another for (II), and subtracting them in the programming language. For instance, in Rails you can just do the following:

query_1 = User.find(:all, :conditions => "..conditions for set (I)..")
query_2 = User.find(:all, :conditions => "..conditions for set (II)..")
# ...and then, just using Ruby...
result = query_1 - query_2

But this has a couple of drawbacks:

1) It loads all records into memory, which is not good, as it takes time and occupies valuable space.
2) And, more importantly in most cases, its throughput is worse than doing the subtraction directly on the database.

So, what is the alternative?

The Solution

Although joining tables is not the cheapest operation available to a database, it still does the job pretty efficiently, and it can be tuned to be pretty fast depending on your needs.

So, in this case, I propose to use the following strategy:

select some_users_set.* from users as some_users_set
LEFT JOIN (select id from users where ..conditions for another_users_set..) as another_users_set
  on some_users_set.id = another_users_set.id
where ..conditions for some_users_set.. AND another_users_set.id IS NULL


The Explanation

The idea behind this SQL is to do a LEFT JOIN between the two sets (I and II) that we want to subtract.

This way, since a LEFT JOIN always keeps every row from the left set, whenever there is no row in set II for a given row of set I, all the columns coming from set II are filled with NULL (as there is no match in the join). And those cases are exactly the ones we are looking for: the rows that exist in set I but have no counterpart in set II.
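
If you want to drive this query from Rails rather than from the database console, here's a minimal sketch using find_by_sql; the admin and created_at conditions are purely hypothetical, just to make the two sets concrete:

# "recent users that are not admins" = set I (recent users) - set II (admins)
recent_non_admins = User.find_by_sql(<<-SQL)
  SELECT some_users_set.*
  FROM users AS some_users_set
  LEFT JOIN (SELECT id FROM users WHERE admin = 1) AS another_users_set
    ON some_users_set.id = another_users_set.id
  WHERE some_users_set.created_at > '2010-01-01'
    AND another_users_set.id IS NULL
SQL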

I find this trick very useful; I hope you can benefit from it too!

Let me know your thoughts, and of course any questions if something is unclear!

See ya!

Tuesday, June 30, 2009

campfire-pattern-alert

People,

I wanted to share with you a simple script published as open source: (git://github.com/mgiorgi/campfire-pattern-alert.git)

I developed this software because I wanted some kind of alert when people mentioned my name in a Campfire room. In fact, our team used to work that way: when someone wanted to talk to me, they just typed 'Marcelo', and I was supposed to check the messages below that.

So I thought it would be great if there were a desktop tool that could connect to Campfire and alert me whenever that happened. Unfortunately, for Linux users(*) like me, there was no good solution to filter out the irrelevant messages and alert me only with the ones I want.

So, I decided to come up with this tiny tool that does exactly that job!

An example of its usage (after configuring the proper Jabber/Campfire connection information in jabber-configuration.rb and campfire-configuration.rb respectively):

$ ./campfire-listener.sh Marcelo JABBER

Then, after a couple of logged messages, every message containing 'Marcelo' is sent to the Jabber account I want. In fact, I decided to also forward the 5 messages following each occurrence of 'Marcelo' (this amount can be changed through a constant in campfire-listener.sh).

General usage looks like: ./campfire-listener.sh pattern output

Where 'pattern' can be a regular expression and 'output' is either STDOUT (the default when the output parameter is not set) or JABBER.
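
For example, to watch for my name in either case and simply print the matching messages to the terminal, the call would look something like this:

$ ./campfire-listener.sh '[Mm]arcelo' STDOUT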

Well, I hope it helps you as much as it helped me! Let me know what you think ;)

(*): I know that such a solution exists for Mac OS, for instance.

Monday, June 8, 2009

A first look at God and Monit

Hey people!,

Lately I've been configuring a production server which is basically composed of a mongrel cluster and some other daemons that we use (backgroundrb, for example). To keep both the mongrel instances and our daemons alive, we needed a monitoring tool to tell us when something goes wrong and to restart those processes if needed.

For that matter, I investigated the following tools: Monit and God.

Monit

Monit (http://mmonit.com/monit/) is a great utility for managing and monitoring processes, files, directories and devices on a Unix system.

Installation

It's very straightforward:
  1. Download the latest version from http://mmonit.com/monit/download/
  2. tar zxvf monit-x.y.z.tar.gz
  3. cd monit-x.y.z
  4. ./configure
  5. make && sudo make install (sudo if necessary)
Usage

After installing it, we need to create a configuration file. Monit will look for this file in the following locations:
  • ~/.monitrc
  • /etc/monitrc
  • /.monitrc
You can choose where you want to put that configuration file. As an example, here is the configuration file I used for my mongrel cluster:

# .monitrc
set daemon 30
set logfile /home/myuser/monit/monit.log
set httpd port 9111
  allow remotehost
  allow admin:admin # Allow Basic Auth

check process mongrel_cluster_3010 with pidfile "/var/www/myapp/current/tmp/pids/mongrel.3010.pid"
  start program = "/usr/local/bin/ruby /usr/local/bin/mongrel_rails start -d -e production -a 127.0.0.1 -c /var/www/myapp/current --user myuser --group deploy -p 3010 -P tmp/pids/mongrel.3010.pid -l log/mongrel.3010.log"
  stop program = "/usr/local/bin/ruby /usr/local/bin/mongrel_rails stop -p 3010 -P /var/www/myapp/current/tmp/pids/mongrel.3010.pid"
  if failed port 3010 protocol http # check for response
    with timeout 10 seconds
    then restart
  group mongrel

check process mongrel_cluster_3011 with pidfile "/var/www/myapp/current/tmp/pids/mongrel.3011.pid"
  start program = "/usr/local/bin/ruby /usr/local/bin/mongrel_rails start -d -e production -a 127.0.0.1 -c /var/www/myapp/current --user myuser --group deploy -p 3011 -P tmp/pids/mongrel.3011.pid -l log/mongrel.3011.log"
  stop program = "/usr/local/bin/ruby /usr/local/bin/mongrel_rails stop -p 3011 -P /var/www/myapp/current/tmp/pids/mongrel.3011.pid"
  if failed port 3011 protocol http # check for response
    with timeout 10 seconds
    then restart
  group mongrel


You can check the complete documentation of the commands allowed for monit in http://mmonit.com/monit/documentation/monit.html.

But I just wanted to show you a simple example that checks the availability of the mongrel instances (in this case I just configured a couple of them). As you may notice, I have to configure each instance separately (although they both belong to the same group, so I can start/stop all of them at the same time).

Another thing to notice is the events I want to monitor. In this case I've configured Monit to restart a given mongrel instance when there is no response from it (the 'if failed port ... then restart' fragment in the code above takes care of this). But there are many other events, such as memory usage or CPU time, that can be measured to alert the administrator (via email).

Run it!

To start it we just execute: monit. Then we can see how the log starts to populate:

[UYT Jun 5 19:12:15] info : monit: generated unique Monit id 15fdb0bdb830b0a114a3831d995ec32e and stored to '/home/myuser/.monit.id'
[UYT Jun 5 19:12:15] info : Starting monit daemon with http interface at [*:9111]
[UYT Jun 5 19:12:15] info : Starting monit HTTP server at [*:9111]
[UYT Jun 5 19:12:15] info : monit HTTP server started
[UYT Jun 5 19:12:15] info : 'willy' Monit started
[UYT Jun 5 19:12:34] info : Shutting down monit HTTP server
[UYT Jun 5 19:12:35] info : monit HTTP server stopped
[UYT Jun 5 19:12:35] info : monit daemon with pid [21560] killed
[UYT Jun 5 19:12:35] info : 'willy' Monit stopped
[UYT Jun 5 19:13:42] info : Starting monit daemon with http interface at [*:9111]
[UYT Jun 5 19:13:42] info : Starting monit HTTP server at [*:9111]

One of the greatest things about Monit is that it offers a web interface (as you may notice from the configuration file), which looks pretty nice!

God

God is a newer tool, written in Ruby and available as a rubygem. For that reason, it is also easier to install (i.e. [sudo] gem install god).

Usage

We need to create a configuration file, which is entirely written in Ruby. This is one of the best things about God, because it helps us reduce duplication in the configuration, as you can see in the following example (which is intended to monitor the same mongrel cluster mentioned above):

# myapp.god
RAILS_ROOT = "/var/www/myapp/current"

%w{3010 3011}.each do |port|
  God.watch do |w|
    w.name     = "mongrel_cluster_#{port}"
    w.group    = 'mongrels'
    w.interval = 30.seconds

    w.start   = "mongrel_rails start -c #{RAILS_ROOT} -p #{port} " \
                "-P #{RAILS_ROOT}/tmp/pids/mongrel.#{port}.pid -d -e production"
    w.stop    = "mongrel_rails stop -P #{RAILS_ROOT}/tmp/pids/mongrel.#{port}.pid -e production"
    w.restart = "mongrel_rails restart -P #{RAILS_ROOT}/tmp/pids/mongrel.#{port}.pid -e production"

    w.start_grace   = 10.seconds
    w.restart_grace = 10.seconds

    w.pid_file = File.join(RAILS_ROOT, "tmp/pids/mongrel.#{port}.pid")

    w.behavior(:clean_pid_file)

    # Start the instance whenever its process is not running
    w.start_if do |start|
      start.condition(:process_running) do |c|
        c.interval = 5.seconds
        c.running  = false
      end
    end
  end
end


As you can see, I can refactor the mongrel configuration into just one block which is instantiated for each mongrel port. Besides that, I've also reduced the amount of configuration by using the RAILS_ROOT constant in many settings (which makes it less error-prone too).

Run It!

To start our God monitoring tool, we just execute: god -c myapp.god

After that, we can see the log using the following command: god log mongrels (to show the logs for our group of mongrel instances). It should display something like this:

I [2009-06-08 12:59:16] INFO: mongrel_cluster_3010 move 'unmonitored' to 'up'
I [2009-06-08 12:59:16] INFO: mongrel_cluster_3010 moved 'unmonitored' to 'up'
I [2009-06-08 12:59:16] INFO: mongrel_cluster_3010 [trigger] process is not running (ProcessRunning)
I [2009-06-08 12:59:16] INFO: mongrel_cluster_3010 move 'up' to 'start'
I [2009-06-08 12:59:16] INFO: mongrel_cluster_3010 before_start: deleted pid file (CleanPidFile)
I [2009-06-08 12:59:16] INFO: mongrel_cluster_3010 start: mongrel_rails start -c /var/www/myapp/current -p 3010 -P /var/www/myapp/current/tmp/pids/mongrel.3010.pid -d -e production
I [2009-06-08 12:59:27] INFO: mongrel_cluster_3010 moved 'up' to 'up'
I [2009-06-08 12:59:27] INFO: mongrel_cluster_3010 [ok] process is running (ProcessRunning)

Conclusion - My Choice

Either of these tools gets the job done pretty well, but I decided to use Monit, because it is easier/faster for me to manage the processes through a web interface instead of logging into the server via ssh.

That said, God also seems to be a great tool, as it provides a way to reduce duplication (because its configuration is defined with Ruby code).

Thursday, March 26, 2009

Installing Rails Full-stack testing environment: Cucumber + Selenium

Hey people!,

I've just had to configure an environment for running both plain Cucumber features and enhanced features (the ones that rely on Javascript) with the help of Selenium, so I decided to centralize the information I found on how to install the full testing stack.

Environment

First of all, I'll be working in a Rails 2.3 environment, and I'll assume that you have Rails/RSpec already installed, since I covered that in a previous entry. It also helps to check http://wiki.github.com/dchelimsky/rspec/configgem-for-rails to fix issues related to Rails >= 2.2 and RSpec.
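
As far as I recall, the recommendation on that page comes down to declaring the rspec gems with config.gem and :lib => false so Rails doesn't try to load them at boot. A sketch (the versions here are only examples, use whatever you have installed):

# config/environments/test.rb
config.gem 'rspec',       :lib => false, :version => '>= 1.2.0'
config.gem 'rspec-rails', :lib => false, :version => '>= 1.2.0'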

Installing cucumber is really easy; you just have to follow the instructions on the cucumber homepage: http://wiki.github.com/aslakhellesoy/cucumber/ruby-on-rails

Selenium

Selenium is a suite of tools, developed by ThoughtWorks, to automate web application testing across many platforms (http://seleniumhq.org/). In particular, it gives us the ability to run features written with Cucumber's semantics inside a real web browser, which lets us test functionality we couldn't exercise without a browser (i.e. Javascript-dependent functionality).

Install Selenium within Rails

1) First of all we need to follow this document: http://wiki.github.com/aslakhellesoy/cucumber/setting-up-selenium (I). As you can see there, you'll need Java >= 1.5 installed for the selenium server to work.

2) Then you should install the selenium-client gem for our Rails application, as usual:

sudo gem install selenium-client

3) After that, we need to install the selenium-on-rails plugin. I used the following repository for that: http://github.com/paytonrules/selenium-on-rails/tree/master (because it has some Rails 2.x issues already solved)

Setting up a test environment in our Rails application

As you can see from (I), it is nice to keep selenium features and non-selenium features in different directories, as they need different libraries and initialization. So I decided to organize them like this:

features/
|-- common
|   `-- step_definitions
|       `-- common_steps.rb
|-- selenium
|   |-- step_definitions
|   |   `-- browser_dependent_steps.rb
|   |-- support
|   |   `-- env.rb
|   `-- some_ajax.feature
|-- plain
|   |-- step_definitions
|   |   `-- webrat_steps.rb
|   `-- non_ajaxy.feature
`-- support
    `-- env.rb

This way, I keep support/env.rb for general environment configuration, but I use selenium/support/env.rb for the selenium features. The same concept applies to step definitions, which are split into: common (used by both plain & selenium features), plain (non-selenium features) and selenium.

This separation helps a lot when creating appropriate tasks to run cucumber. I used the following guide to create those tasks: http://gist.github.com/73771.
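
Roughly, the tasks I ended up with look like this (a sketch adapted to the layout above; the task names and --require paths are my own choices, and the linked gist goes into more detail):

# lib/tasks/cucumber.rake
require 'cucumber/rake/task'

namespace :features do
  Cucumber::Rake::Task.new(:plain, "Run the features that don't need a browser") do |t|
    t.cucumber_opts = "--require features/support --require features/common --require features/plain features/plain"
  end

  Cucumber::Rake::Task.new(:selenium, "Run the browser-dependent features") do |t|
    t.cucumber_opts = "--require features/selenium/support --require features/common --require features/selenium features/selenium"
  end
end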

Transactional fixtures

When running tests, Rails by default wraps every test in a transaction that is rolled back at the end (transactional fixtures), so our records never really reach the database. That isolation is a good thing (for our specs, for instance), but for selenium we do want the records saved, because the selenium-driven browser hits the web server in a different process (so it has no visibility of records that only exist inside an uncommitted transaction). See http://wiki.github.com/aslakhellesoy/cucumber/troubleshooting

As suggested there, I used database_cleaner (http://github.com/bmabey/database_cleaner/tree/master) to manage the database state in the selenium features' setup.
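
Just to make that concrete, here is a minimal sketch of what the selenium/support/env.rb can contain; the exact name of the transactional-fixtures setting depends on your cucumber-rails version, so treat it as an outline rather than copy-paste material:

# features/selenium/support/env.rb
require 'database_cleaner'

# Disable transactional fixtures so the browser-driven requests can see the data
# (the setting name may differ in your generated env.rb)
Cucumber::Rails::World.use_transactional_fixtures = false

# Truncate the tables between scenarios instead of relying on a rollback
DatabaseCleaner.strategy = :truncation

After do
  DatabaseCleaner.clean
end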

Issue with sqlite3

Besides that, I found some complications running the selenium features with sqlite3. In particular, the problem was that I couldn't disable transactional fixtures for it. Using MySQL worked perfectly for me, though.

Selenium API

For information on Selenium and the methods provided by the SeleniumDriver class (which represents our browser), you can check the following links:

http://seleniumhq.org/documentation/core/reference.html

http://selenium.rubyforge.org/rdoc/classes/Selenium/SeleniumDriver.html

Another interesting tool is Selenium IDE, a Firefox plugin that can record your actions and write the corresponding script using the Selenium API (which can then be re-used within your tests).

See ya!

Sunday, November 30, 2008

Cucumber vs StoryRunner

Hi guys!,

I'm exploring the capabilities provided by this beautiful framework, as it is going to be used by our team at work. So I decided to take advantage of that, study it a little deeper, and share my impressions with you.

Installation

This is just a matter of typing:

gem install cucumber

And to install it in your Rails project, execute this line in your vendor/plugins directory:

git clone git://github.com/aslakhellesoy/cucumber.git

Features

So the first important difference to notice is that Cucumber is about features (not .stories). The idea behind this is to write the old stories in a more BDD way. So, for each new feature you have to develop, you should first write a file in the features folder (with a .feature extension) and specify the new functionality in it, in such a way that it fails. When the feature is fully implemented, the test should pass.

You can execute a cucumber feature like this:

* cucumber features/my_new_feature.feature, or by executing:

* rake features, if you configure the Rakefile of your application with the following code:

require 'cucumber/rake/task'
Cucumber::Rake::Task.new


Contents of features files

Cucumber is about Features (instead of StoryRunner's Stories). Inside a Feature, in a similar fashion to StoryRunner, we have to define a Scenario. So one of the first lines of that file should be a Scenario with a key that identifies it. Then every Scenario is composed of a list of steps built from these keywords: Given, When, Then (same as StoryRunner) AND But.

Here is a simple example of a Cucumber's feature:

Feature: Serve coffee
  In order to earn money
  Customers should be able to
  buy coffee at all times

  Scenario: Buy last coffee
    Given there are 1 coffees left in the machine
    And I have deposited 1$
    When I press the coffee button
    Then I should be served a coffee


The definition of what to do within each step (a Given, When, Then or But sentence) is loaded from step files, in a similar way to StoryRunner. But Cucumber has an important difference here: steps are defined using Regular Expressions instead of plain strings. So you can do something like this:

When /(\w+) enters (his|her) email with "(\S+\@\S+)"/ do |name, _pronoun, email|
  fills_in "email", :with => User.find_by_name(name).email
end


As you can see, using Regular Expressions gives us more expressiveness while writing our features.

Up to this point, everything seems pretty much the same as StoryRunner, but I found some big improvements here:

1) Steps are automatically available from the step_definitions folder (or any sub-folder of features). This is a big win for reuse compared with StoryRunner, which requires you to explicitly import any step files needed by the story.

2) There's a relatively new feature which lets you invoke steps from within other steps. For example, we can have some common steps like /Given a user with login '(\S+)'/ or /And is not logged in/, among others, that can be reused in many features (we probably need our user logged in to use most of our site's functionality). Then we can invoke these steps from within other steps, like this:

#features/steps/login_steps.rb
Given /a user with login '(\S+)'/ do
..
end
Given /a password '(\S+)'/ do
..
end
Given /is not logged in/ do
..
end

#features/steps/somefile_steps.rb
Given /User (\S+) with password (\S+) is logged in/ do |username, pass|
Given "a user with login #{username}"
Given "a password #{pass}"
Given "is not logged in"
end

#features
Feature: Some feature

Scenario: Some scenario description

Given User pepe with password pepe is logged in
And ..
Then ..

This way, we can reuse most of our steps, making our tests more DRY.

3) Finally, it is possible to define our stories in a language other than English. As I won't be using this for now, I haven't realized yet how interesting it might be... most of my work is written in English, but you never know :P.

That's enough for now, talk to you later ;)