If the cluster nodes are running Ubuntu, I can do this with a shell script like the following:
for i in `cat nodelist`; do
    ssh $i /usr/sbin/groupadd hadoop
    ssh $i /usr/sbin/useradd -g hadoop hadoop
    ssh $i echo "hadoop" | passwd --stdin hadoop
done
where nodelist is a text file listing all the cluster nodes, one per line.
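For reference, such a nodelist file is nothing more than hostnames (or IPs), one per line; the names below are hypothetical:

node01.cluster.local
node02.cluster.local
node03.cluster.local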
However, this does not work on Red Hat Linux 6. You cannot run "ssh $i" and then "echo 'hadoop' | passwd --stdin hadoop", because the pipe is interpreted by the local shell: only the echo runs on the remote machine, while passwd runs locally. What you can do instead is create two script files. File A just covers adding the user and setting the password; file B copies file A to each node in the cluster and runs it locally on each node.
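As an aside, a common alternative (untested here, and assuming the local-pipe behavior above is the only problem) is to quote the whole pipeline so it executes entirely on the remote side:

ssh $i 'echo "hadoop" | passwd --stdin hadoop'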
For example, my file A looks like:
#!/bin/bash
# Runs locally on each node: create the hadoop group and user, then set the password.
/usr/sbin/groupadd hadoop
/usr/sbin/useradd -g hadoop hadoop
echo "hadoop" | passwd --stdin hadoop
My file B looks like:
#!/bin/bash
# Runs on the master node: push file A to every node and execute it there.
for i in `cat nodelist`; do
    scp fileA $i:./
    ssh $i bash ./fileA   # "bash ./fileA" avoids depending on the execute bit surviving the copy
done
Thus, I just need to create file A and file B on the master node and run file B from there.
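Note that for either loop to run unattended, the master node needs passwordless SSH (as root) to every node. A minimal sketch, assuming root login over SSH is permitted and the same nodelist file:

ssh-keygen -t rsa
for i in `cat nodelist`; do
    ssh-copy-id root@$i   # prompts for the root password once per node
done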