Clusterssh alternative for managing multiple SSH servers [closed]


Is there any alternative to Clusterssh, pssh, etc., for managing multiple SSH-based servers through one interface?

One weakness of Clusterssh is that my servers use key-based authentication with a passphrase to log in, and there is no way to log in to the servers using the private key.

Is there any alternative available that supports authentication with private keys?
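
(One common workaround for the passphrase prompt, assuming an SSH agent is acceptable in your environment, is to load the key into ssh-agent once before launching cssh; every ssh session that cssh opens then authenticates through the agent without asking again. The key path and host names below are only examples:)

eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_rsa    # prompts for the passphrase once
cssh server1 server2 server3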

Take a look at Rundeck – http://rundeck.org/

  1. Fabric

    Define your tasks first:

    from fabric.api import *
    
    @parallel
    @hosts('192.168.3.118', '192.168.6.142')
    def hostname():
        run('hostname')
    

    Then execute it via the fab command-line tool (see the note after this list for using it with a private key):

    $ fab -f /path/to/.py/file hostname
    [192.168.3.118] Executing task 'hostname'
    [192.168.6.142] Executing task 'hostname'
    [192.168.6.142] run: hostname
    [192.168.3.118] run: hostname
    [192.168.6.142] out: SVR040-6142
    
    [192.168.3.118] out: SVR040-3118.localdomain
    
    
    Done.
    
  2. Gnome Connection Manager
  3. PAC Manager
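
On the private-key point from the question: Fabric authenticates much like plain ssh, so (at least in Fabric 1.x) you can either load the passphrase-protected key into an agent first or pass a key file to fab with -i. A rough sketch, reusing the hostname task above and assuming the key lives at ~/.ssh/id_rsa:

$ ssh-add ~/.ssh/id_rsa                  # enter the passphrase once; fab then authenticates via the agent
$ fab -f /path/to/fabfile.py hostname

$ fab -f /path/to/fabfile.py -i ~/.ssh/id_rsa hostname    # or point fab at the key file directly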

You can go whole hog and install a configuration management system like Puppet or Chef. You haven’t mentioned how many nodes you’re actually trying to manage, so this might be overkill, but you can certainly control a lot of machines centrally this way. If you’re small right now but growing, you may also want to set up, say, Chef, before you get that much bigger.

If you need to run ad hoc commands over a specific set of nodes, you can do something like knife ssh 'roles:webserver' 'hostname' (knife is Chef’s command-line tool) to run the hostname command on all nodes that have the webserver role.
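
As a sketch, an ad hoc run across every node carrying the role could look roughly like this (the deploy user and key path are placeholders, and the identity-file option is spelled differently across knife versions, e.g. -i/--identity-file in older releases and --ssh-identity-file in newer ones):

knife ssh 'roles:webserver' 'sudo chef-client' -x deploy -i ~/.ssh/id_rsa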

I use expect scripts to automate the logins (especially because I have to pass through a jump box, enter a chroot, and type lots of passwords) and did some “tweaks” to the cssh config.
So I have a “main script” in my bin folder that, given a server name/alias, takes me into the server that I want and where I want.

In ~/.clusterssh/config I’ve set the “ssh” parameter to point to my script. “ssh_args” must also be set to some innocuous/fake argument, because cssh has its own default argument list and, if the parameter is left empty, that default list would end up being passed to the script.

So the script (in each window/terminal) receives that fake argument plus one of the arguments given to cssh (the server alias). For the given server it retrieves, from a file, the credentials and the steps it must take to get where I want, and then it calls the “expect code” with all that data.

~/.clusterssh/config

ssh=/home/user/bin/qs.sh
ssh_args=-a 

qs.sh

#!/bin/bash
export PATH=~/bin:$PATH
shift   # drop the fake argument that cssh passes in via ssh_args
case $1 in
q4|q5|q6|q7|q8|q9)
    essh user1@axt$1
    ### essh is a little bash script that does the things described above and then launches the expect code (a rough sketch follows below)
    ;;
q1|q2|q3)
    essh axtr@axt$1
    ;;
*)
    echo "GOOH"
esac
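
The essh helper itself isn’t shown; purely as a rough sketch of the idea (a single hop, one passphrase prompt, no jump box or chroot), it boils down to spawning ssh under expect, answering the prompt, and handing the terminal back. The prompt text and the hard-coded secret are placeholders; the real script reads its credentials and steps from a file as described above.

#!/bin/bash
# Called as: essh user1@axtq4 (from qs.sh above)
TARGET="$1" expect <<'EOF'
spawn ssh $env(TARGET)
expect {
    "Enter passphrase for key" { send -- "not-my-real-passphrase\r" }
    timeout                    { exit 1 }
}
interact
EOF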

So I usually call it with something like this:

# cssh q4 q5 q6 q7

It also works with “cluster aliases”: with the cluster “qAll q4 q5 q6 q7” defined, I can just call cssh qAll.

Hope it helps someone else.
