With the Final release of Fedora 20 approaching, there has been a lot of work validating the release. All three sets of tests (base, install, and desktop) are required to pass before a release can happen. I don't have an exact count (yet), but there are roughly 100 test cases to run through for each Test Candidate (TC) and Release Candidate (RC). That's a lot of tests, and most of them require a fresh installation.
With that many test cases, it's hard to know what order to run tests in if you want to be efficient. At best, you'll remember a couple of tests that can be run in tandem (for instance, installing KDE and then running the KDE desktop tests); at worst, you'll end up doing a fresh install for every test case, whether it needs one or not. That got me thinking: being new at this and all, it seemed like there should be some kind of efficient testing 'roadmap.' Since I couldn't find one, I decided to make one.
I thought it would be great to have a starting testcase, with a complete list of tests that can be run at the same time or in a series. I imagine most testers who've been working in QA for a couple of years already have this at least semi-mapped out in their heads when they start testing. However, for those of us who are new and still getting the hang of it (read: still familiarizing ourselves with all the test cases), it would be great to have a map. I find that having a map or a list of related tasks speeds up familiarization with a process. Initially, this will likely be just a list of tests you can do at once.
The goal of this roadmap is two-fold: easy adoption and more efficient manual testing. By grouping testcases together, we can reduce the steepness of the learning curve and minimize the amount of time spent waiting on an install to finish.
I'm working on a wiki page for this process. While we're testing for Final now, there will have to be maps of which tests to run based on the phase of the release cycle we're in. This adds a level of complication, because each phase will need separate maps, since we don't test the same things for Alpha that we do for Final. The starting point will be the Alpha test maps, and the additional tests for the other release phases will build on top of them. At the moment, I'm not sure how many test maps we'll end up with.
The format for test maps still needs to be fleshed out. My first thought was to use either a flowchart or a simple list. Lists seemed the simplest and most intuitive, so that's what I went with. This is a very simple proof of concept, but I can imagine a plethora of uses and integration points for an approach like this.
Below is the general direction I see this approach to testing going:
Wiki Page -> Web App -> TCMS Integration -> Giving Karma -> Getting badges
While the wiki is a good place to start, it's hard to integrate with. The next step would be a web app (or integration with the test day app) that would handle all the record keeping, or at least be able to spit out wiki-formatted results for the user to paste in.
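As a sketch of the "spit out wiki-formatted results" step, the app could emit the result template testers paste into the validation matrix. The `{{result|...}}` form below mirrors the template used on the Fedora wiki, but treat the exact syntax as an assumption of this sketch:

```python
# Emit a wiki result template a tester could paste into the matrix.
# The {{result|status|user|bug}} form is assumed here, not guaranteed.
def wiki_result(status, user, bug=None):
    parts = ["result", status, user]
    if bug:
        parts.append(str(bug))
    return "{{%s}}" % "|".join(parts)

print(wiki_result("pass", "jsmith"))           # {{result|pass|jsmith}}
print(wiki_result("fail", "jsmith", 1012345))  # {{result|fail|jsmith|1012345}}
```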
I picture this web application allowing users to identify themselves (via their FAS account) and select the setup they have available (i686/x86_64, preferred desktop environment, SATA/PATA, etc.), after which the system hands them a testing map that fits that setup. This would make it a lot easier for new testers to get started, and would let veteran testers avoid having to think about which test is next.
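The matching step is simple in principle. Here's a hedged sketch of how the app might pick applicable maps for a tester's setup; the map definitions, field names, and constraint scheme are all invented for illustration:

```python
# Hypothetical matching: a None constraint means "any value matches".
MAPS = [
    {"name": "kde_x86_64", "arch": "x86_64", "desktop": "KDE"},
    {"name": "gnome_i686", "arch": "i686", "desktop": "GNOME"},
    {"name": "base_any", "arch": None, "desktop": None},  # applies to everyone
]

def maps_for(setup):
    """Return the names of maps whose constraints this setup satisfies."""
    matches = []
    for m in MAPS:
        if m["arch"] not in (None, setup["arch"]):
            continue
        if m["desktop"] not in (None, setup["desktop"]):
            continue
        matches.append(m["name"])
    return matches

print(maps_for({"arch": "x86_64", "desktop": "KDE"}))
```

A real implementation would also need to factor in the release phase (Alpha, Beta, Final), since each phase has its own set of maps.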
Eventually, it would be nice to notify users of packages that have an update pending in Bodhi. If someone is testing a TC, the app could keep a list of the packages included in that TC and remind the user to run fedora-gooey-karma and leave karma when an update is waiting in Bodhi.
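The reminder logic itself is just an intersection. Assuming the app fetches the set of package names with pending Bodhi updates separately (the fetching is out of scope here), the core could look like this sketch:

```python
# Hypothetical reminder logic: given the packages in a TC compose and
# the package names with pending Bodhi updates (fetched elsewhere),
# list the ones worth leaving karma on.
def karma_reminders(tc_packages, bodhi_pending):
    pending = set(bodhi_pending)
    return sorted(p for p in tc_packages if p in pending)

print(karma_reminders(["anaconda", "kernel", "bash"], ["kernel", "vim"]))
```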
At almost any point during this project, it would be great to add support for Fedora Badges. I think there could be a couple of badges specific to this project, but it would also aid in granting badges for testing in general. Right now there isn't an easy way to award badges based on release validation testing (as I understand it; I could be wrong).
Approaching testing this way has a couple of implications from a user perspective, but the technical implementation is pretty straightforward. For new testers, it's a single point of entry that carries all the information in the current test matrices. Testcase creators, on the other hand, now have to think about where a new testcase fits in the list of test maps.
With the newly minted Working Groups (WGs) for Fedora 21, this could provide a decent mechanism for handing test maps (and all the associated test cases) over to whoever is doing the testing for each WG. Per-WG test maps would be easy for QA to manage and customize, and could even be a good source of metrics for tracking testing efforts throughout the Fedora ecosystem.
I realize that I very likely have no idea how big a project this could turn into, but I think the phases I've described so far are fairly manageable chunks. Besides, at any point from here on, the community can decide this is a bad idea and we stop working on it.