I'm using plain capistrano to deploy a simple cluster written in node.
I defined roles as follows:
role :boss, "bosshost"
role(:worker) { get_worker_hosts }
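For context on the block form: Capistrano 2 stores the block and only calls it when the role's hosts are actually needed, so something like get_worker_hosts can be evaluated lazily. Here is a minimal plain-Ruby sketch of that deferred lookup (define_role and hosts_for are illustrative names, not Capistrano's real API):

```ruby
# Illustrative model of deferred role evaluation, like role(:name) { ... }.
roles = {}

# Store the block instead of calling it immediately.
def define_role(roles, name, &block)
  roles[name] = block
end

# Call the stored block only when the hosts are requested.
def hosts_for(roles, name)
  roles[name].call
end

# Hypothetical worker discovery; stands in for get_worker_hosts above.
define_role(roles, :worker) { ['worker1.example.com', 'worker2.example.com'] }
puts hosts_for(roles, :worker).inspect
```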
I'm using Capistrano's default deploy flow plus my own tasks to put the worker app on the servers.
The problem is that I don't want any of that for the boss, since it's only a single script. Ideally, something like this would do:
namespace :boss do
  task :update, :roles => [:boss] do
    upload 'boss.js', "#{boss_home}/boss.js"
  end

  task :restart, :roles => [:boss] do
    run "forever restart #{boss_home}/boss.js"
  end
end
I used :roles => [:worker] in all worker-related tasks that run after deploy:finalize_update. However, running $ cap deploy still puts unnecessary stuff on the boss server.
How do I tell Capistrano that the deploy task and the default tasks that follow it should be run only on servers with the worker role?
I was able to figure this out for myself as well. In my particular case, I needed to deploy the application to only one server, since the app cluster was using an NFS mount, but I needed to restart a node daemon on all of the servers in the cluster after the deploy completed.
Here was my solution, which is to set up roles for each server (you can use role too):
server 'a-server.com', :app, :web, :service
server 'another-server.com', :service, :no_release => true
The key is :no_release => true, so that code is not deployed to this server. The built-in tasks follow this guideline. I also had a couple of custom tasks; I solved those by looking at the deploy output of the tasks that were running in parallel and adding the following:
task "my_task", :except => { :no_release => true } do
  # do stuff here...
  # Example of scoping a single run call to one role:
  run "sudo /etc/init.d/nodejs-#{application} restart", :roles => :service
end
Hope that helps!