Expanding and Refining Manual Application Testing

Registered by Nicholas Skaggs

Discussing options for improving manual testing, metrics, and user feedback for those running the development version of Ubuntu

Blueprint information

Status:
Started
Approver:
Jono Bacon
Priority:
Medium
Drafter:
None
Direction:
Approved
Assignee:
Nicholas Skaggs
Definition:
Approved
Series goal:
Accepted for quantal
Implementation:
Started
Milestone target:
ubuntu-12.10
Started by
Nicholas Skaggs

Related branches

Sprints

Whiteboard

-[effenberg0x0] I think what I suggested at https://blueprints.launchpad.net/ubuntu/+spec/qa-q-qa-testing-cadence and also https://blueprints.launchpad.net/ubuntu/+spec/qa-q-community and https://blueprints.launchpad.net/ubuntu/+spec/qa-q-iso-testing-process covers improving testing and measuring results more effectively than what we have today. The current structure (Launchpad, new tags) can be used in smarter ways, and new structure (repo T-1, an updated packages+changelogs page) can improve the efficiency of current testing. [Effenberg0x0 at Ubuntu.com]

AGENDA:
Review Precise Cycle
How much feedback/testing occurred?
13 calls for testing, 8 different developers/teams, > 100 people
10 testcase contributors, 8 contributors to checkbox-manual-app-testing
What was useful / what went well?
Opinions?
We don't want to burn out or ask too much of our testers
We do want to acknowledge and communicate their contributions
Tools
Checkbox
manual app testing was done in Precise using it
Moztrap
currently doesn't fit our needs; doesn't offer an API to interact with
ISOtracker
plans to modify to allow reporting of results and testcase management; see session
http://summit.ubuntu.com/uds-q/meeting/20462/qa-q-isotracker-testcases/
Daily Testing
Delivering new packages and testcases to users on the desktop, simply by running the release
appear via an indicator, like Update Manager
offer to test a package
based on hardware?
based on existence of proposed repo?
based on ?
install package automagically from ppa, feed tests and start up app for testing via one-click acceptance :-)
Getting positive testing/feedback loop into the hands of users and developers
Apport
enhancements needed?
Manual trigger of Apport: https://wiki.canonical.com/PES/QA/Automation/Checkbox/Bug
Delivering changelogs in a more usable and noticeable way
aka, what changed with today's updates?
Proposed Repository
how to best utilize?
Coordinated events
Milestone testing
Specific "calls for testing"
Testing Days
will use isotracker to record
will schedule with skaet
Other Ideas from whiteboard
 it would be amazing if developers had some way of indicating what should be tested / stress tested in a package.
[nskaggs] Could we cover this by using a specific testing repository (proposed?) that contains specific packages and versions with the changes (on webpage?)
We should also invest in making LiveTesting work, focusing on critical components like Ubiquity, Jockey, etc.
[nskaggs] Agreed, covering idea in the "testing" days idea above
We must offer some interface through which developers can request testing for their packages. [effenberg] Developers could add a testing-request.json to the package. The JSON indicates package name, package release, date, what was changed, links to the changelog if unavailable in the package, what should be tested, tips on what to stress test, and contact info. LP can read the JSON; it's something that can be integrated.

[nskaggs] what do you propose? Currently this is encouraged but in an ad-hoc manner
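The testing-request.json idea above could be sketched as follows; the field names and the validation helper are illustrative assumptions, not an agreed format:

```python
import json

# Hypothetical fields for the proposed testing-request.json --
# names here are illustrative, not an agreed schema.
REQUIRED_FIELDS = {"package", "release", "date", "changes", "test_focus", "contact"}

def validate_testing_request(raw):
    """Parse a testing-request.json payload and check required fields."""
    request = json.loads(raw)
    missing = REQUIRED_FIELDS - request.keys()
    if missing:
        raise ValueError("missing fields: %s" % ", ".join(sorted(missing)))
    return request

# Example payload a developer might ship alongside a package
example = json.dumps({
    "package": "ubiquity",
    "release": "12.10",
    "date": "2012-06-01",
    "changes": "Reworked partitioning screen",
    "test_focus": ["manual partitioning", "resize existing install"],
    "contact": "developer@example.com",
})

request = validate_testing_request(example)
```

A tool reading the archive (or Launchpad) could reject requests missing required fields and route the rest to testers.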
A special tag, identifying scheduled-test-results, should be used on LP.
[nskaggs] I am unsure what is meant here. [effenberg] It was an idea to report test-request results to developers in a way that doesn't get lost in the flood of reports in LP.

Why not ask Ubuntu users to create a Launchpad account when they download a development release, auto-add them to a "Development Release Users" team, and present them with a webpage pointing out that they can join a testing team and contribute?
In general how can we make running the development version more "testing" focused? In your face prompts, apps, etc.

Loose notes
Test case ordering could be improved so that repetitive cases are grouped together
whoopsie session: http://summit.ubuntu.com/uds-q/meeting/20661/foundations-q-crash-database-2/

 * can we aggregate the changelogs out of update-manager into a single view, so that users don't have to click on each package in the list to get an idea of what's changed? apt-listchanges can be configured to send email.
 * link out to launchpad to show the diff between the two versions?
 * Possible errata releases alongside package updates?
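On the apt-listchanges point above: it can mail changelogs as updates are installed via its config file. A minimal /etc/apt/listchanges.conf along these lines would do it (values here are a sketch; check apt-listchanges(1) on the target release):

```ini
[apt]
frontend=mail
email_address=user@example.com
which=both
confirm=false
save_seen=/var/lib/apt/listchanges.db
```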

How do you push fixes to -proposed?: https://wiki.ubuntu.com/StableReleaseUpdates#Procedure

== ACTIONS ==

(?)

Work Items

Work items:
[cgregan] provide information about apport-bug to nskaggs: TODO
[popey] follow up on apport refusing to file bugs for packages taken from the official unity ppa: TODO
[kate.stewart] add Testing Days to Release Interlock page and coordinate with nskaggs: DONE
[vorlon] discuss with mpt about adding a button to update-manager to let users see the list of all changes in one go instead of having to drill down into the list; useful to have available in the development release in particular: DONE
[nskaggs] Meet with SRU verification team to discuss resource contention: DONE
[nskaggs] Meet with stgraber to discuss options for translation of testcases (to other languages) to enable a wider pool of testers: DONE
[nskaggs] Talk to apport team to understand plans for apport (how it prompts, when it appears, etc) during the development release: DONE
[nskaggs] Contact AlanBell to ensure accessibility testcases exist and are well-represented for accessibility: DONE
[nskaggs] Meet with stgraber to discuss using isotracker for manual testing events: DONE
[kate.stewart] investigate options into making education available to folks who install CD or when apport comes up?: POSTPONED

Dependency tree
