Since version 0.9, Docker ships with the libcontainer execution driver, and containers can be accessed with the nsenter utility (e.g. you no longer need to install SSH in a container!).

Nsenter is included in the util-linux package since version 2.23.

If your distribution has an older version of util-linux, you can compile it yourself:

~$ curl | tar -zxf-
~$ cd util-linux-2.24
~$ ./configure --without-ncurses
~$ make nsenter
~$ sudo cp nsenter /usr/local/bin

To enter a container you need to know its PID, which can be obtained from docker inspect given the container ID:

~$ PID=$(docker inspect --format '{{.State.Pid}}' CONTAINER_ID)

Using the PID you can then enter the container:

~$ sudo nsenter --target $PID --mount --uts --ipc --net --pid /bin/bash

If you don't specify which program to launch inside the container, ${SHELL} is run. I prefer to specify it (/bin/bash) because I use ZSH, which I don't usually want to install inside containers.
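The two steps above (docker inspect for the PID, then nsenter) can be combined into a small shell function. This is only a sketch; the docker-enter name is my own, not a standard tool:

```shell
# docker-enter: hypothetical helper combining the two steps above.
# Usage: docker-enter CONTAINER_ID [command ...]
docker-enter() {
  local cid="$1"; shift
  local pid
  # Ask Docker for the PID of the container's init process
  pid=$(docker inspect --format '{{.State.Pid}}' "$cid") || return 1
  # Enter the container's namespaces; default to /bin/bash if no command given
  sudo nsenter --target "$pid" --mount --uts --ipc --net --pid "${@:-/bin/bash}"
}
```

With this in your shell profile, `docker-enter CONTAINER_ID` drops you into a bash inside the container in one step.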

In the last post I suggested a minimal setup to begin Ruby-based TDD. In this post I want to show a possible minimal setup for Node.js-based TDD (Node.js and npm must be installed). The kata will again be The Game of Life.

I'm not a Node.js expert, so I hope what I'm writing is correct :)

I'll use the Mocha test framework together with expect.js, "minimalistic BDD-style assertions for Node.JS and the browser".

Let's begin with the package.json file, which tells npm what to install:

{
  "name": "game-of-life",
  "version": "0.0.1",
  "dependencies": {
    "mocha": "*",
    "expect.js": "*"
  }
}

With this file in the project folder you can run npm install to install the libraries. Then create the test/ folder with a mocha.opts file, where you can specify various options, like the reporter to use:

--reporter spec

With this file in place, the mocha command will run the tests.
Now write a minimal js file and its corresponding test file:


// test/game_of_life_test.js
var expect = require('expect.js'),
  GameOfLife = require('../lib/game_of_life');

describe('Universe', function(){
  it('should have an initial size', function() {
    var u = new GameOfLife(6);
    expect(u.getSize()).to.be(36);
  });
});


// lib/game_of_life.js
function GameOfLife(side){
  this.size = side * side;
}

GameOfLife.prototype.getSize = function() { return this.size; };

module.exports = GameOfLife;

Now launch mocha and the first test should pass:

~$ mocha

    ✓ should have an initial size 

  1 passing

To learn more about TDD and Node.js, start by reading this post from Azat Mardan

In the last few weeks, together with the guys of the Firenze Ruby Social Club, we started thinking about organizing some code katas to play with test-driven development, and yesterday we met to play with The Game of Life.

We used Ruby and RSpec with a simple setup that I want to report here if you'd like to play with some katas.

Although a kata exercise probably won't need many gems, in Ruby projects I like to always use a Gemfile listing the required gems:

ruby '2.1.2'  
source ''  
gem 'rspec'  

After a bundle install you're ready to start writing some code (if you don't have the bundle command, install the bundler gem with gem install bundler).

A basic example to begin TDD with the Game of Life could be this:


# game_of_life_spec.rb
require './game_of_life'

describe GameOfLife::Universe do
  it "should have an initial size" do
    u =
    expect(u.size).to eq(36)
  end
end


# game_of_life.rb
module GameOfLife
  class Universe
    attr_reader :size
    def initialize(side)
      @size = side**2
    end
  end
end

With this minimalistic setup, the test passes:

~$ bundle exec rspec --color game_of_life_spec.rb

Finished in 0.00095 seconds (files took 0.0966 seconds to load)  
1 example, 0 failures  

You're now ready to start playing with TDD in Ruby :)

If you deploy a Rails app and forget to configure automatic log rotation, a few weeks later it won't be hard to find something like this:

$ ls -lh log/production.log
  -rw-rw-r-- 1 www-data www-data 93,2M apr 10 17:49 production.log

Imagine having to find an error inside a 100MB log file: not easy... :)

Setting up log rotation isn't difficult at all. I know two main ways.

Use syslog

This is a really easy solution: Rails will use the standard syslog as its logger, which means the logs will be rotated automatically.

Open config/environments/production.rb and add this line:

config.logger =  

If you want to avoid mixing your logs with the system logs, you need to add some parameters:

config.logger ='/var/log/<APP_NAME>.log')  

Use logrotate

This is the cleaner way, but it requires creating a file on the server, inside the /etc/logrotate.d/ folder. This is a possible content of the /etc/logrotate.d/rails_apps file:

/path/to/rails/app/log/*.log {
    rotate 28
    copytruncate
}

The copytruncate option is required unless you want to restart the Rails app after every rotation: without it the app will keep writing to the old log file, if it still exists, or will stop logging (or, worse, crash) if the file is deleted.
Below are the copytruncate details from the logrotate man page:

      Truncate  the  original log file in place after creating a copy,
      instead of moving the old log file and optionally creating a new
      one.  It  can be used when some program can not be told to close
      its logfile and thus might continue writing (appending)  to  the
      previous log file forever.  Note that there is a very small time
      slice between copying the file and truncating it, so  some  log-
      ging  data  might be lost.  When this option is used, the create
      option will have no effect, as the old log file stays in  place.

To check the logrotate script you can use the logrotate command with the debug option (-d), which performs a dry run:

sudo logrotate -d /etc/logrotate.d/rails_apps  

If everything looks ok you can wait until the next day, or manually launch the rotation with:

sudo logrotate -v /etc/logrotate.d/rails_apps  

If you, like me, have a lot of Ruby apps and want to check whether their code is vulnerable, Codesake::Dawn could be a useful gem.

This gem supports Rails, Sinatra and Padrino apps. To install it in a Rails app, add the gem to the development group in Gemfile:

group :development do  
  gem 'codesake-dawn', require: false
end

then run bundle install.
Now add this line to the Rakefile:

require 'codesake/dawn/tasks'  

Installation finished. To check the app you just have to run rake dawn:run:

~$ rake dawn:run
15:27:03 [*] dawn v1.1.0 is starting up  
15:27:04 [$] dawn: scanning .  
15:27:04 [$] dawn: rails v4.0.3 detected  
15:27:04 [$] dawn: applying all security checks  
15:27:04 [$] dawn: 171 security checks applied - 0 security checks skipped  
15:27:04 [$] dawn: 1 vulnerability found  
15:27:04 [!] dawn: Owasp Ror CheatSheet: Session management check failed  
15:27:04 [$] dawn: Severity: info  
15:27:04 [$] dawn: Priority: unknown  
15:27:04 [$] dawn: Description: By default, Ruby on Rails uses a Cookie based session store. What that means is that unless you change something, the session will not expire on the server. That means that some default applications may be vulnerable to replay attacks. It also means that sensitive information should never be put in the session.  
15:27:04 [$] dawn: Solution: Use ActiveRecord or the ORM you love most to handle your code session_store. Add "Application.config.session_store :active_record_store" to your session_store.rb file.  
15:27:04 [$] dawn: Evidence:  
15:27:04 [$] dawn:     In your session_store.rb file you are not using ActiveRercord to store session data. This will let rails to use a cookie based session and it can expose your web application to a session replay attack.  
15:27:04 [$] dawn:     {:filename=>"./config/initializers/session_store.rb", :matches=>[]}  
15:27:04 [*] dawn is leaving  

Since MySQL v5.5.29 the mysqldump command can emit the following warning:

-- Warning: Skipping the data of table mysql.event. Specify the --events option explicitly.

A typical mysqldump command for a full dump looks like this:

mysqldump --opt -u <USERNAME> -p<PASSWORD> --all-databases | gzip > full_dump.sql.gz  

On a server that runs it during periodic backups, this means a warning email from the cron daemon: very annoying.

If you add the --events option as suggested, you may get this error:

mysqldump: Couldn't execute 'show events':  
  Access denied for user '<USERNAME>'@'localhost' to database '<DATABASE>' (1044)

The solution is to grant the EVENT permission to the user:

GRANT EVENT ON <DATABASE>.* to '<USERNAME>'@'localhost' with grant option;  
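Once the permission is granted, the --events option can simply be added to the full-dump command shown above (same placeholders as before):

```shell
# Full dump as before, with --events added so the mysql.event warning goes away
mysqldump --opt --events -u <USERNAME> -p<PASSWORD> --all-databases | gzip > full_dump.sql.gz
```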

As far as I know, if you don't care about events there's no way to suppress the message with some --no-events option.
There's an interesting discussion about this (maybe-not-a-)bug here

While waiting for a sitemap generator inside the Ghost core (planned as a "future implementation"), I decided to implement a way to generate an up-to-date sitemap.xml during deployment.
As you can read in the previous post, I'm deploying this blog with Capistrano and capistrano-node-deploy.
So I added a deploy:generate_sitemap task which is executed at the end of the deployment process.

This is the Capfile extract:

namespace :deploy do  
  task :generate_sitemap do
    run "cd #{latest_release} && ./ #{latest_release}"
  end
end

after "node:restart", "deploy:generate_sitemap"  

So at the end of the deployment the script is executed. The script lives in the blog root and is a customized version of the code you can find here:

It essentially does 3 things:

  • Puts the sitemap.xml link in the robots.txt file
  • Scans (using wget) the website and generates the sitemap.xml file in the content folder
  • Notifies Google Webmaster Tools

What I changed from the original script:


The user and group variables are used to chmod the sitemap.xml file, so check that the web user (probably www-data) can read that file.

This process has a big problem: the sitemap is generated only during deployment, not when I publish a new post. A workaround is to run cap deploy:generate_sitemap after publishing a new post.

It works but I need an automatic way. Any idea?
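A crude stopgap, sketched here with a hypothetical path and schedule, would be a cron entry that re-runs the Capistrano task periodically:

```shell
# Hypothetical crontab entry: regenerate the sitemap every night at 3:00
0 3 * * * cd /path/to/blog && bundle exec cap deploy:generate_sitemap
```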

I just moved this blog from Jekyll to Ghost (v0.4.2 at the time of writing) and I had to find a fast way to deploy changes to the server.
I'm pretty confident with Capistrano so, although Ghost doesn't use Ruby, I decided to use it to manage deployments.
A cool gem allows node apps to be deployed with Capistrano: capistrano-node-deploy

This is the Gemfile:

source ''  
gem 'capistrano', '~> 2.15.5'  
gem 'capistrano-node-deploy', '~> 1.2.14'  
gem 'capistrano-shared_file', '~> 0.1.3'  
gem 'capistrano-rbenv', '~> 1.0.5'  

If you don't use rbenv just remove the related line in the Gemfile and change the Capfile accordingly.

This configuration works well, but it has some problems if you use nvm instead of a system-wide installation of node and npm.

To fix them I had to add some variables (nvm_path, node_binary and npm_binary) and completely override the node:install_packages task. Without these changes the deploy task ends with messages like:

/usr/bin/env: node
No such file or directory  


node: not found  

This isn't really a good solution, because you must change nvm_path every time you upgrade nvm, but it's the only way I've found so far :)

I also changed the app_command variable to launch node ~/apps/ instead of node ~/apps/ in the upstart script. The second command doesn't actually work, although it is the gem's default.

This is the full content of the Capfile (remember to change the placeholder values to your own):

require "capistrano/node-deploy"  
require "capistrano/shared_file"  
require "capistrano-rbenv"  
set :rbenv_ruby_version, "2.1.1"

set :application, ""  
set :user, "<USERNAME>"  
set :deploy_to, "/home/#{user}/apps/#{application}"

set :app_command, "index"

set :node_user, "<USERNAME>"  
set :node_env, "production"  
set :nvm_path, "/home/<USERNAME>/.nvm/v0.10.26/bin"  
set :node_binary, "#{nvm_path}/node"  
set :npm_binary, "#{nvm_path}/npm"

set :use_sudo, false  
set :scm, :git  
set :repository,  "<GIT REPO URL>"

default_run_options[:pty] = true  
set :ssh_options, { forward_agent: true }

server "<SERVER HOSTNAME OR IP>", :web, :app, :db, primary: true

set :shared_files,    ["config.js"]  
set :shared_children, ["content/data", "content/images"]

set :keep_releases, 3

namespace :deploy do  
  task :mkdir_shared do
    run "cd #{shared_path} && mkdir -p data images files"
  end

  task :generate_sitemap do
    run "cd #{latest_release} && ./ #{latest_release}"
  end
end

namespace :node do  
  desc "Check required packages and install if packages are not installed"
  task :install_packages do
    run "mkdir -p #{previous_release}/node_modules ; cp -r #{previous_release}/node_modules #{release_path}" if previous_release
    run "cd #{release_path} && PATH=#{nvm_path}:$PATH #{npm_binary} install --loglevel warn"
  end
end

after "deploy:create_symlink", "deploy:mkdir_shared"  
after "node:restart", "deploy:generate_sitemap"  
after "deploy:generate_sitemap", "deploy:cleanup"  

Recently I haven't managed to update this blog much. Often the cause wasn't a lack of content, but the platform's lack of immediacy. It's true, I had fun with Jekyll and I think the idea of serving a static site is great, but the implementation is really quite bare, and this often stopped me when I just wanted to start writing an article.

For some time now I've been looking around for a new platform, convinced in any case that I don't want to use Wordpress.

As it happens, in the last few weeks I started rewriting the interface of Rubyfatt using Ember and, almost at the same time, I started hearing about Ghost, which has in fact decided to rewrite its admin interface with Ember. It immediately seemed like an opportunity to seize.

I'm working on the migration from Jekyll to Ghost, so very soon the pages of this blog may change their look for the umpteenth time