Automated test project for HABA HABA.
This release brings the HABA Online application onto the SIT and UAT environments. Testing will cover the functional testing of the Shopping Cart functionality for this release, as detailed in the Requirements Specification documents. Installation will be tested on the different platforms described in the Requirements Specification. Testing will cover installation on these platforms, as well as a set of critical functions, to confirm that the code works on all platforms.
The testing approach for this release will accommodate the current functionality of the Shopping Cart products being developed on SIT and UAT. Testing will be designed to encompass the following.
Testing will cover functional testing of changes through the test interface. This will validate the base functions of the new code as they relate to the standard model for presenting data and handling user-entered data.
The test cases need to be proven in the following browsers:
*Chrome
*Firefox
The objective of system testing is to verify the correctness of the newly designed items and their interaction with the existing functions. Testing will focus on the functionality of the Shopping Cart. Testing will be accomplished through an organized process with repeatable tests, using scripts created and designed to match the requirements developed for the Testing Team. Execution of test scripts for new functionality and regression tests will be planned in coordination with the project plan. Testing and development will be executed in parallel, based on phased implementations, wherever possible.

Test scripts will be structured to give full coverage of the converted functions in both a positive and a negative fashion, simulating what a potentially unfamiliar user might do. Positive test cases verify that the application functions as expected and as described in the Requirements Specification and the Project Plan. Negative test cases exercise limits and boundaries outside the expected designs. The results of this testing will indicate the stability of the application and its components. Additional testing beyond the scripted tests may be done where feasible to exercise the application and verify error handling and system recovery from incorrect data or field entries.
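The positive/negative distinction above can be sketched in code. The following is a minimal illustration, assuming a hypothetical cart-quantity validator (`validate_quantity` is an illustrative name, not part of the HABA code base): the positive case confirms expected input is accepted, the negative cases probe boundaries outside the expected design.

```python
# Hypothetical quantity validator; the real HABA Shopping Cart API may differ.
def validate_quantity(quantity):
    """Accept a positive integer cart quantity; reject anything else."""
    if not isinstance(quantity, int) or isinstance(quantity, bool):
        raise TypeError("quantity must be an integer")
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    return quantity

# Positive case: expected input is accepted as-is.
assert validate_quantity(3) == 3

# Negative cases: boundary and invalid inputs raise clear errors
# rather than corrupting the cart state.
for bad in (0, -1, "2"):
    try:
        validate_quantity(bad)
    except (TypeError, ValueError):
        pass  # rejection is the expected behavior
    else:
        raise AssertionError(f"expected rejection of {bad!r}")
```

Scripted negative cases like these are what give the team evidence about stability, since they document exactly which invalid entries the application is expected to refuse.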
*Manual Testing
*Automation Testing - Selenium
*Defect Tracking Tool - Jira
Test Cases:
*Customer can add any available item to cart
*Customer can update quantity of item in cart
*Customer can remove item from cart
*Customer can jump to item details
*Customer can start checkout process
*Customer can checkout via PayPal
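The cart test cases above can be sketched as scripted, repeatable checks. The sketch below runs against a minimal in-memory cart model; the `Cart` class is an illustrative stand-in for the real HABA Shopping Cart, and the PayPal checkout is stubbed rather than driven through the actual payment flow.

```python
# Minimal in-memory cart model; a stand-in for the real HABA Shopping Cart.
class Cart:
    def __init__(self):
        self.items = {}  # item name -> quantity

    def add_item(self, name, quantity=1):
        self.items[name] = self.items.get(name, 0) + quantity

    def update_quantity(self, name, quantity):
        if name not in self.items:
            raise KeyError(name)
        self.items[name] = quantity

    def remove_item(self, name):
        del self.items[name]

    def start_checkout(self, method="card"):
        # Stubbed: a real test would drive the PayPal flow end to end.
        if not self.items:
            raise ValueError("cannot check out an empty cart")
        return {"method": method, "items": dict(self.items)}


# Scripted checks mirroring the listed test cases.
cart = Cart()
cart.add_item("board game")                    # add available item to cart
cart.update_quantity("board game", 2)          # update quantity of item
assert cart.items == {"board game": 2}
order = cart.start_checkout(method="paypal")   # checkout via PayPal (stubbed)
assert order["method"] == "paypal"
cart.remove_item("board game")                 # remove item from cart
assert cart.items == {}
```

In the real suite these steps would be exercised through the Selenium-driven test interface rather than a local object, but the assertions per test case remain the same.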
Deliverables and milestones are as follows: test scripts completed and signed off by the team managers; data sets to be used for testing (spreadsheets and tables with client data designed to reflect actual use by a client). Milestones will be decided after release of the final project timeline.
High risk must be assumed concerning the completion of new functions and the related testing within the defined time frame.
*Test Lead
*Experienced Software Tester
*Product Manager