This guide will walk you through the steps to set up a local instance of Taskotron. The current version is 0.0.3.
You will need a host (H) capable of running two Fedora 19 VMs (i386 or x86_64), a master (M) and a slave (S); a sketch for creating them follows the specs below.
master (M): 30-40GB disk, 1-2 procs, 2-4GB RAM
slave (S): >=20GB disk, >=1 proc, 1-2GB RAM
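If H uses libvirt, a pair of virt-install invocations along these lines would create appropriately sized guests. The VM names and the install-tree URL here are only assumptions for illustration, so adjust them to your environment:
virt-install --name taskotron-master --ram 4096 --vcpus 2 --disk size=40 --location http://dl.fedoraproject.org/pub/fedora/linux/releases/19/Fedora/x86_64/os/
virt-install --name taskotron-slave --ram 2048 --vcpus 1 --disk size=20 --location http://dl.fedoraproject.org/pub/fedora/linux/releases/19/Fedora/x86_64/os/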
For both VMs, choose the Web Server installation and accept all the partitioning defaults. We don’t need anything fancy for our purposes.
While your VMs are finishing up their installations, we can set up H. We’ll be working out of /srv for most of this tutorial, using a tool called Ansible. First, install Ansible:
yum install ansible
Second, create the directory tree we’ll be working out of.
mkdir -p /srv/{keys,ansible/private/qa/certs/taskotron-local}
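That one mkdir call creates both trees thanks to brace expansion. If you have the tree package installed, tree -d /srv should show:
/srv
├── ansible
│   └── private
│       └── qa
│           └── certs
│               └── taskotron-local
└── keys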
cd /srv/ansible/
Next, clone the Ansible playbooks we’ll be using from Bitbucket:
git clone https://bitbucket.org/fedoraqa/ansible-playbooks.git qa
Hopefully by now your VMs are done or almost done installing. Once they are, we need to set up key-based SSH on both machines. For this tutorial, we’ll keep all the keys we use in /srv/keys.
cd /srv/keys
ssh-keygen -f vm-user-key
ssh-keygen -f user_name.git
ssh-copy-id -i vm-user-key user@master-ip
ssh-copy-id -i vm-user-key user@slave-ip
ssh-copy-id -i vm-user-key root@master-ip
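Before going further, it’s worth confirming that key-based login actually works, since Ansible will rely on it. Each of these should run without prompting for a password (hostname is just an arbitrary harmless command):
ssh -i /srv/keys/vm-user-key user@master-ip hostname
ssh -i /srv/keys/vm-user-key user@slave-ip hostname
ssh -i /srv/keys/vm-user-key root@master-ip hostname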
Later on we’ll also need two other non-user keys, one for the gitolite admin and one for S to authenticate to M, so we’ll go ahead and create those now.
cd /srv/ansible/private/qa/certs/taskotron-local
ssh-keygen -f taskgit-admin
ssh-keygen -f id_buildslave
Now that we’re done with the SSH keys we’ll be needing, it’s time to get our SSL certs. Log into M and install mod_ssl, which generates a self-signed certificate for the host:
yum install mod_ssl
Then, back on H, copy the generated cert and key over:
cd /srv/ansible/private/qa/certs/taskotron-local
scp -i /srv/keys/vm-user-key root@master-ip:/etc/pki/tls/certs/localhost.crt .
scp -i /srv/keys/vm-user-key root@master-ip:/etc/pki/tls/private/localhost.key .
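If you want to be sure the copied files are a valid cert and key, openssl can inspect them; these commands only read the two files you just copied:
openssl x509 -in localhost.crt -noout -subject -dates
openssl rsa -in localhost.key -noout -check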
Now that your host is set up and ready to go, we’re going to configure Ansible to automatically configure M and S. Example settings are included in the ‘ansible-playbooks’ repo we cloned to /srv/ansible/qa.
cd /srv/ansible/qa
cp doc/taskotron-example-settings.yml /srv/ansible/private/qa/taskotron-local.yml
Now that we have a template to work with, we’re going to edit some settings and create an inventory file for Ansible. Edit the taskotron-local.yml file with the new keys and options (assuming you kept the default file names), as sketched after this list:
Edit the 'sslcertfile' line to read: sslcertfile: localhost.crt
Edit the 'sslkeyfile' line to read: sslkeyfile: localhost.key
Change 'username' to 'qaadmin'.
Change the hostname to whatever you want M's hostname to be.
Change 'buildmaster' to the IP of M.
Optional things to change:
buildbot_user and buildbot_pw (these are used for the web interface)
buildslave_pw (used for S to auth to M)
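Put together, the edited portion of /srv/ansible/private/qa/taskotron-local.yml might end up looking roughly like this. The hostname, IP, and passwords are placeholder assumptions, and any key names beyond those called out above should be checked against the example settings file:
sslcertfile: localhost.crt
sslkeyfile: localhost.key
username: qaadmin
hostname: taskotron-master.localdomain
buildmaster: 192.168.122.10
buildbot_user: qaadmin
buildbot_pw: changeme
buildslave_pw: changeme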
Once those edits are done, create a file named “local” in /srv/ansible/qa and make it look like this, substituting the real IPs for the placeholders:
[taskotron-local]
ip-of-M
[taskotron-local-slaves]
ip-of-S buildslave_name=static-slave1
This file is the inventory file for Ansible: it is how Ansible knows which machines it’s allowed to connect to.
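For example, with hypothetical addresses 192.168.122.10 for M and 192.168.122.11 for S, the finished file would read:
[taskotron-local]
192.168.122.10
[taskotron-local-slaves]
192.168.122.11 buildslave_name=static-slave1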
Now we’re ready to run the first playbook task on M.
ansible-playbook -i local --private-key=/srv/keys/vm-user-key -e 'envtype=taskotron-local target=ip-of-M' -u M-Username playbooks/taskotron-master.yml -t base
This command resets the root password and configures a second user for passwordless sudo: the ‘qaadmin’ user you set in /srv/ansible/private/qa/taskotron-local.yml on H.
The next step is to restart the firewall on M (this step won’t always be needed).
systemctl restart firewalld.service
Once that’s complete, run the full playbook on M.
ansible-playbook -i local --private-key=/srv/keys/vm-user-key -e 'envtype=taskotron-local target=ip-of-M' -u qaadmin playbooks/taskotron-master.yml
The base task (the previous step, denoted by ‘-t base’ at the end of the command) created your qaadmin user, so from now on qaadmin is the user we specify in Ansible calls. Note the change from ‘-u <M username>’ to ‘-u qaadmin’; the --private-key argument is the same key you created and copied over with ssh-copy-id earlier.
After the playbook completes, restart the firewall on M once more:
systemctl restart firewalld.service
On H, open a browser and navigate to the M IP and you’ll see a landing page. Nothing is configured at this point - but the foundation for M is done.
On H, open your .ssh/config file and add:
Host <M ip>
IdentityFile /srv/ansible/private/qa/certs/taskotron-local/taskgit-admin
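With that in place, any SSH connection to M as the gitolite3 user will present the taskgit-admin key. gitolite’s built-in info command makes for a quick sanity check; it should respond with the list of repos that key can access:
ssh gitolite3@<M ip> info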
Somewhere outside of /srv (like your home directory), clone the gitolite admin repo from M.
git clone gitolite3@M-ip:gitolite-admin
Inside the cloned repo you’ll find ‘conf/’ and ‘keydir/’ directories. Copy in the id_buildslave public key along with your git key:
cd gitolite-admin
cp /srv/ansible/private/qa/certs/taskotron-local/id_buildslave.pub keydir/
cp /srv/keys/<user_name>.git.pub keydir/
The last step for configuring gitolite is to edit conf/gitolite.conf to look like this (a user name in the conf must match a key file in keydir/, minus its .pub extension):
repo gitolite-admin
    RW+ = taskgit-admin
    RW+ = <user_name>.git

repo testing
    RW+ = @all

repo rpmlint
    RW+ = @all
Once those changes are made to the repo, add, commit and push back to M. Run these from the top level of the gitolite-admin checkout:
git add .
git commit -m 'Updated keys and the admin user.'
git push origin master
Now we need a task for Taskotron to run. The FedoraQA Bitbucket account has a task for rpmlint. We’re going to clone it and push it to the ‘rpmlint’ repo on M that you already set up in the gitolite conf. Do this in the same area where you cloned gitolite-admin (outside of /srv).
git clone https://bitbucket.org/fedoraqa/task-rpmlint.git
cd task-rpmlint
git remote add taskotron-local gitolite3@<M ip>:rpmlint
git push -u taskotron-local --all
Now the master is configured and ready to have slaves attached to it!
Just like with M, you’re going to run the ‘-t base’ task first as the slave user in order to run the initial setup.
cd /srv/ansible/qa
ansible-playbook -i local --private-key=/srv/keys/vm-user-key -e 'envtype=taskotron-local target=S-ip' -u S-username playbooks/taskotron-slave.yml -t base
SSH into S and add a line to /etc/hosts so that S can resolve M’s hostname:
ip-of-M hostname-from-taskotron-local.yml
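You can confirm the new entry resolves (getent consults /etc/hosts, so this verifies exactly what you just added):
getent hosts hostname-from-taskotron-local.yml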
Now restart the firewall on S.
systemctl restart firewalld.service
Now that the initial configuration of S is finished, run the rest of the playbook:
ansible-playbook -i local --private-key=/srv/keys/vm-user-key -e 'envtype=taskotron-local target=S-ip' -u qaadmin playbooks/taskotron-slave.yml
The last step before trying some builds is to do an initial clone from M, which adds M to .ssh/known_hosts for the buildslave user on S, and then restart the firewall.
sudo su - buildslave
git clone gitolite3@M-ip:rpmlint
and finally:
systemctl restart firewalld.service
Now S should be attached to M. Go back to the web interface for M.
Open http://<M ip>/taskmaster and log in with the buildbot_user and buildbot_pw you specified in taskotron-local.yml.
Navigate to http://<M ip>/taskmaster/builders/statictasks.
In the rpmcheck field group, fill out the last three inputs.
name of check to run: rpmlint
envr of package to test: simple-xml-2.7.1-2.fc20 (just grab something off Koji)
arch of rpm to test: noarch
Hopefully the build ran and was reported as a success. Either way, check the build’s stdio log in the web interface to see what actually happened; the reported status alone doesn’t tell the whole story.
Now that you can manually trigger builds, it’s time to set up the trigger on M to listen to fedmsg for pushes.
SSH into M and edit /etc/hosts to reflect M’s hostname:
127.0.0.1 localhost.localdomain localhost hostname-from-taskotron-local.yml
And finally, start the trigger!
sudo su - qaadmin
cd /home/qaadmin/trigger
fedmsg-hub
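fedmsg-hub runs in the foreground, so leave it going. If you’d like to watch the bus traffic the trigger reacts to, the fedmsg package also ships a fedmsg-tail utility you can run in a second terminal:
fedmsg-tail --really-pretty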
Congrats, you now have a working dev instance of Taskotron! Your master is listening for messages from Koji: when a build is pushed, M will task S to run the rpmlint test on the newly pushed package.