Test Manager Sample Question Set 1
Q-1: Which of the following is an example of testing as part
of the requirements specification phase of a project?
1. A requirements review meeting
2. A business analyst eliciting requirements
3. A database administrator designing a table
4. A test results report showing requirements coverage
Q-2: Assume you are a test manager working on a project to
create a programmable thermostat for home use to control central heating, ventilation, and air
conditioning (HVAC) systems. This project is following a sequential lifecycle model,
specifically the V-model. Currently, the system architects have released a first draft design specification
based on the approved requirements specification released previously.
Identify all of the following that are appropriate test
tasks to execute at this time:
1. Design tests from the requirements specification
2. Analyze design-related risks
3. Execute unit test cases
4. Write the test summary report
5. Design tests from the design specification
Q-3: You are the manager of a bank's quality assessment
group, in charge of independent testing for banking applications. You are working on a project to
implement an integrated system that will use three off-the-shelf systems to manage a bank's
accounts-receivable system.
Identify all of the following that are test levels that you
would expect to directly manage:
1. Component testing for each system
2. Component integration testing for each system
3. System testing for each system
4. Contract acceptance testing for each system
5. System integration testing
Q-4: Which of the following is generally applicable to
demonstrating compliance to regulations for safety critical systems?
1. Test traceability
2. IEC 61508
3. Usability testing
4. Employee evaluations
Q-5: Which of the following is an accurate statement that
captures a difference between a trend chart showing the total number of bugs discovered and
resolved during test execution and a trend chart showing the total number of test cases passed
and failed during test execution?
1. The bug trend chart will reveal test progress problems while the test case trend chart will not.
2. The bug trend chart will show test coverage while the test case trend chart will not.
3. The test case trend chart will reveal test progress problems while the bug trend chart will not.
4. The test case trend chart will show test coverage while the bug trend chart will not.
Q-6: Considering the typical objectives of testing that are
identified in the Foundation syllabus, which of the following metrics can we use to measure the
effectiveness of the test process in achieving one of those objectives?
1. Average days from defect discovery to resolution
2. Lines of code written per developer per day
3. Percentage of test effort spent on regression testing
4. Percentage of requirements coverage
Q-7: Assume you are a test manager working on a project to
create a programmable thermostat for home use to control central heating, ventilation, and air
conditioning (HVAC) systems. This project is following a sequential lifecycle model,
specifically the V-model. Your test team is following a risk-based testing strategy augmented with a
reactive testing strategy during test execution to ensure that key risks missed during risk
analysis are caught.
Of the following statements, identify all that are true
about the test plan.
1. The test plan should include risk analysis early in the process.
2. The test plan should list the requirements specification as an input to the risk analysis.
3. The test plan should assign specific bugs to specific developers.
4. The test plan should discuss the integration of reactive test techniques into test execution.
5. The test plan should specify overall project team metrics used to determine bonuses.
Q-8: Which of the following is a test document in which you
would expect to find the preconditions to start executing a level of testing?
1. Test plan
2. Test design specification
3. Incident report
4. Project plan
Q-9: Which of the following is the most likely reason a user
might be included in test execution?
1. Their application domain knowledge
2. Their technical expertise
3. Their testing expertise
4. Their management expertise
Q-10: You are the manager of a bank's quality assessment
group, in charge of independent testing for banking applications. You have just concluded a project to
implement an integrated system that will use three off-the-shelf systems to manage a bank's
accounts-receivable system. During the project, you found that one of the vendor's systems, while
comprising approximately the same amount of functionality and of roughly the same complexity
as the other two systems, had significantly more defects. Making no assumptions other than
ones based on the information provided here, which of the following is a reasonable
improvement to the test process for subsequent projects involving this vendor?
1. Impose retroactive financial penalties on this vendor for the number of bugs delivered on this project.
2. Perform an acceptance test for all systems received, with particular rigor for this vendor.
3. Cancel the contract with this vendor and put it on an industry blacklist.
4. Require the vendor's developers to attend training to improve their ability to write quality code.
Q-11: Which of the following is a project risk mitigation
step that you might take as a test manager?
1. Testing for performance problems
2. Hiring a contractor after a key test analyst quits
3. Procuring extra test environments in case one fails during testing
4. Performing a project retrospective using test results
Q-12: You are planning the testing for an integrated system
that will use three off-the-shelf components to manage a bank's accounts-receivable system.
You are conducting an informal quality risk analysis session with project and system stakeholders
to determine what test conditions should be tested and how much each test condition
should be tested. Which of the following is a quality risk item that you might identify in
this quality risk analysis session?
1. Failure of a component vendor to conduct adequate component testing
2. Calculation of excessive late-payment penalties for invoices
3. On-time payment of all invoices for international vendors
4. Calculation of risk priority using likelihood and impact
Q-13: During a formalized quality risk analysis session
following the Failure Mode and Effect Analysis technique, you are calculating risk priorities. Which of the
following are major factors in this calculation?
1. Severity and priority
2. Functionality, reliability, usability, efficiency, maintainability, and portability
3. Loss of a key contributor on the test team
4. Loss of a key contributor on the development team
Q-14: Assume you are a test manager in charge of integration
testing, system testing, and acceptance testing for a bank. You are working on a project
to upgrade an existing automated teller machine system to allow customers to obtain cash
advances from supported credit cards. The system should allow cash advances from $20 to $500,
inclusively, for all supported credit cards. The supported credit cards are American Express,
Visa, Japan Credit Bank, Eurocard, and MasterCard.
Which of the following statements best associates a key
stakeholder with the kind of input that stakeholder can provide during a quality risk analysis?
1. A tester can provide input on the likelihood of a risk item.
2. A developer can provide input on the impact of a risk item.
3. A business analyst can provide input on the likelihood of a risk item.
4. A help desk staffer can provide input on the impact of a risk item.
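Note: as background for this question, stakeholder input is typically combined by having technical stakeholders rate likelihood and business stakeholders rate impact, with the two ratings multiplied (or otherwise combined) into a risk level. A minimal sketch, with invented risk items and 1-to-5 scales:

    # Minimal sketch: combining likelihood (technical input) and impact
    # (business input) into a risk level. Items, scales, and the multiplication
    # rule are illustrative assumptions, not taken from the scenario.
    risk_items = [
        {"item": "Cash advance rejected for a supported card", "likelihood": 4, "impact": 5},
        {"item": "Receipt shows the wrong advance amount", "likelihood": 2, "impact": 3},
    ]
    for risk in risk_items:
        risk["level"] = risk["likelihood"] * risk["impact"]
    # Higher risk level -> more test effort, earlier execution
    for risk in sorted(risk_items, key=lambda r: r["level"], reverse=True):
        print(f'{risk["item"]}: risk level {risk["level"]}')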
Q-15: Which of the following is a situation in which you
would expect an iterative quality risk analysis to result in the largest number of new or changed risk items
and risk levels?
1. You perform a risk analysis on the final requirements specification and subsequently receive a draft design specification.
2. A tester leaves after test design is complete, and you hire a new tester to replace her.
3. The development manager hires two additional programmers after the quality risk analysis is complete.
4. You perform a risk analysis on the final requirements specification and then that document is placed under formal configuration management.
Q-16: Assume you are a test manager working on a project to
create a programmable thermostat for home use to control central heating, ventilation, and air
conditioning (HVAC) systems. In addition to the normal HVAC control functions, the
thermostat also has the ability to download data to a browser-based application that runs on PCs for
further analysis. During quality risk analysis, you identify compatibility
problems between the browser-based application and the different PC configurations that can
host that application as a quality risk item with a high level of likelihood. You plan to perform
compatibility testing to address this risk.
Which of the following is a way in which you might monitor
the effect of testing on the reduction of this risk during test execution?
1. Reduce the number of supported PC configurations.
2. Assign more testers to cover compatibility than testers to cover functionality.
3. Analyze the number of defects found that relate to this risk item.
4. Plan to test the most common PC configurations.
Q-17: Which of the following is an example of a project
where failure mode and effect analysis would be a better choice for risk analysis?
1. It is the project team's first application of risk-based testing.
2. The system under test is both complex and safety critical.
3. The system under test is a financial system.
4. Minimizing the amount of documentation is a key concern.
Q-18: Assume you are a test manager in charge of integration
testing, system testing, and acceptance testing for a bank. You are working on
a project to upgrade an existing automated teller machine system to allow
customers to obtain cash advances from supported credit cards. The system should allow cash advances from $20 to $500,
inclusively, for all supported credit cards. The supported credit cards are American Express,
Visa, Japan Credit Bank, Eurocard, and MasterCard.
In the master test plan, the Features to be Tested section
lists the following:
I. All supported credit cards
II. Language localization
III. Valid and invalid advances
IV. Usability
V. Response time
Relying only on the information given above, select the features to be tested for which sufficient information is available to proceed with test design.
1. I
2. II
3. III
4. IV
5. V
Q-19: Continue with the scenario described in the previous
question. Which of the following topics would you need to address in detail in the master test plan?
1. A strategy for regression testing
2. A list of advance amount boundary values
3. A description of intercase dependencies
4. A logical collection of test cases
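Note: option 2 of Q-19 refers to advance amount boundary values. For the $20-to-$500 inclusive range in this scenario, boundary value analysis would exercise the values at and just outside each boundary. A minimal sketch, assuming whole-dollar advance amounts (the scenario does not state the increment):

    # Boundary value analysis for cash advances of $20 to $500 inclusive.
    # Whole-dollar amounts are an assumption; the requirement does not say
    # whether advances are restricted to, e.g., $20 increments.
    LOW, HIGH = 20, 500

    def advance_is_valid(amount: int) -> bool:
        return LOW <= amount <= HIGH

    # Two-value boundary analysis: each boundary plus its nearest invalid neighbor.
    for amount in (LOW - 1, LOW, HIGH, HIGH + 1):  # 19, 20, 500, 501
        print(f"${amount}: {'valid' if advance_is_valid(amount) else 'invalid'}")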
Q-20: Assume you are a test manager working on a project to
create a programmable thermostat for home use to control central heating, ventilation, and air
conditioning (HVAC) systems. In addition to the normal HVAC control functions, the
thermostat also has the ability to download data to a browser-based application that runs on PCs for
further analysis.
The company test strategy calls for each test case to be run on all combinations of configuration options. For this system, you identify the following factors and, for each factor, the following options:
Supported PC/thermostat connections: USB and Bluetooth
Supported operating systems: Windows 2000, Windows XP, Windows Vista, Mac OS X, Linux
Supported browsers: Internet Explorer, Firefox, Opera
Because there are 10 test cases that involve downloading
data, this would require running 300 test cases, each of which requires an hour to run.
With management approval, you decide to test five configurations,
covering each option but not all the possible pairs and triples of options.
Which of the following statements describes the best option
for documenting this deviation from the test strategy?
1. In the test design specifications, explain the alternate approach planned for this project and how to set up the test configurations.
2. In the test procedure specifications, explain which test cases should be run against which configurations.
3. In the master test plan, explain the alternate approach planned for this project and why this approach is sufficient.
4. In the test item transmittal report, explain the alternate approach planned for this project and which test items were tested against which configuration.
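Note: the 300-run figure in this scenario is the full cross product of the configuration factors (2 connections x 5 operating systems x 3 browsers = 30 configurations) multiplied by the 10 download test cases. The sketch below checks that arithmetic and shows one possible set of 5 configurations covering every individual option at least once; the specific picks are illustrative, not taken from the scenario:

    from itertools import product

    connections = ["USB", "Bluetooth"]
    operating_systems = ["Windows 2000", "Windows XP", "Windows Vista", "Mac OS X", "Linux"]
    browsers = ["Internet Explorer", "Firefox", "Opera"]

    all_configs = list(product(connections, operating_systems, browsers))
    print(len(all_configs))        # 30 configurations
    print(len(all_configs) * 10)   # 300 one-hour test case runs

    # One illustrative 5-configuration selection covering each option at least
    # once (many other selections would also work).
    selected = [
        ("USB", "Windows 2000", "Internet Explorer"),
        ("Bluetooth", "Windows XP", "Firefox"),
        ("USB", "Windows Vista", "Opera"),
        ("Bluetooth", "Mac OS X", "Internet Explorer"),
        ("USB", "Linux", "Firefox"),
    ]
    for factor, values in zip(("connection", "OS", "browser"), zip(*selected)):
        print(factor, sorted(set(values)))  # every option of each factor appears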
Q-21: Continue with the scenario described in the previous
question. You are writing a master test plan to cover integration testing and system testing of the
programmable thermostat. Select all of the following statements that are true.
1. The approach section should describe how to test the integration of the thermostat with other parts of the HVAC system.
2. The schedule section should describe when integration testing should start and when system testing should start.
3. The environmental needs section should address who is responsible for each level of testing.
4. The test items section should describe the equipment required for each level of testing.
5. The test deliverables section should describe results reporting for each level of testing.
Q-22: Continue with the scenario described in the
previous questions, where you are a test manager working on a project to create a programmable
thermostat for home use to control central heating, ventilation, and air conditioning (HVAC)
systems. One critical quality risk identified for this system is the possibility of damage to
the HVAC system caused by excessive cycling of the compressor (i.e., turning the unit on and off
repeatedly in short intervals). Which of the following is a reasonable way to use an IEEE 829 test
plan to direct appropriate testing for this risk?
1. Write a separate test plan for this level of testing.
2. List the feature that prevents excessive cycling as a feature to be tested.
3. Detail all of the requirements of the programmable thermostat in the introduction of the test plan.
4. Include a fully functioning compressor as one of the test items.
Q-23: Continue with the scenario from the previous question.
Historically, on seven past projects, the test team has
found approximately 12 bugs during system test for each person-month of development team
effort. Five developers are assigned to work on a new project that is scheduled to last six months.
Assume that the cumulative number of bugs found, as shown in your convergence chart, has
flattened at 351 defects. Based on this information only, which of the following
statements is most likely to be true?
1. You would expect to find exactly 20 more defects before the end of system test.
2. You have omitted tests for at least one critical quality risk category.
3. You needed a test team of at least three testers for optimum testing.
4. You have found roughly the number of defects you would expect to find during system test.
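Note: the expected-defect figure behind this question follows from the historical rate of roughly 12 bugs per person-month of development effort, multiplied by 5 developers and 6 months, for about 360 expected defects; the 351 found is close to that expectation. A minimal sketch of the arithmetic:

    # Historical expectation vs. actual defects found, using the scenario's figures.
    bugs_per_person_month = 12
    developers = 5
    months = 6

    expected_defects = bugs_per_person_month * developers * months  # 360
    found_defects = 351
    print(expected_defects, found_defects, round(found_defects / expected_defects, 3))  # ~0.975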
Q-24: Continue with the scenario from the previous question.
Assume that this project is following an iterative
lifecycle, while the previous projects for which you have bug metrics followed a sequential lifecycle.
Assuming no other dissimilarities between this project and the previous projects exist, which of the
following might be a reason to question the accuracy of the predicted number of defects?
1. People factors
2. Material factors
3. Process factors
4. Quality factors
Q-25: You are a test manager in charge of system testing on
a project to update a cruise-control module for a new model of a car. The goal of the
cruise-control software update is to make the car more fuel efficient.
You have written a first release of the system test plan
based on the final requirements
specification. You receive an early draft of the design
specification. Identify all of the following statements that are true.
1. Do not update the system test plan until the final version of the design specification is available.
2. Produce a draft update of the system test plan based on this version of the design specification.
3. Check this version of the design specification for inconsistencies with the requirements specification.
4. Participate in the final review of the design specification but not any preliminary reviews of the design specification.
5. Review the quality risk analysis to see if the design specification has identified additional risk items.
Q-26: Which of the following is the best example of a
technique for controlling test progress in terms of the residual level of quality risk?
1. Counting the number of defects found and the number of defects resolved
2. Counting the number of test cases passed and the number of test cases failed
3. Counting the number of requirements that work properly and the number of requirements with known defects
4. Counting the number of tested risk items without known defects and the number of tested risk items with known defects
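Note: to illustrate the control technique in option 4, test progress can be reported against residual quality risk by classifying each risk item as tested with no known defects, tested with known defects, or not yet tested. A minimal sketch with invented risk items and statuses:

    # Minimal sketch of residual-risk reporting; the data is illustrative.
    risk_items = [
        {"item": "Compressor short-cycling protection", "tested": True, "open_defects": 0},
        {"item": "Browser compatibility of data download", "tested": True, "open_defects": 3},
        {"item": "Temperature schedule programming", "tested": False, "open_defects": 0},
    ]
    tested_clean = sum(1 for r in risk_items if r["tested"] and r["open_defects"] == 0)
    tested_with_defects = sum(1 for r in risk_items if r["tested"] and r["open_defects"] > 0)
    untested = sum(1 for r in risk_items if not r["tested"])
    print("Risks tested, no known defects:", tested_clean)
    print("Risks tested, known defects:   ", tested_with_defects)
    print("Risks not yet tested:          ", untested)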
Q-27: You are a test manager in charge of system testing on
a project to update a cruise-control module for a new model of a car. The goal of the
cruise-control software update is to make the car more fuel efficient.
Halfway through test execution, you find that the test
results do not conclusively determine whether fuel efficiency has improved. Identify all of the
following actions that you might direct the test analysts to take to help to resolve this problem.
1. Redesign the fuel efficiency tests.
2. Revise the quality risk analysis.
3. Modify the test environment to gather more detailed actual results.
4. Check for consistency in tested fuel mixtures.
5. Report fuel efficiency as apparently unchanged.
Q-28: Assume you are a test manager in charge of integration
testing, system testing, and acceptance testing for a bank. You are working on a project
to upgrade an existing automated teller machine system to allow customers to obtain cash
advances from supported credit cards.
The system should allow cash advances from $20 to $500,
inclusively, for all supported credit cards. The supported credit cards are American Express,
Visa, Japan Credit Bank, Eurocard, and MasterCard.
During test execution, you find five defects, each reported
by a different tester, that involve the same problem with cash advances, with the only difference
between these reports being the credit card tested. Which of the following is an improvement
to the test process that you might suggest?
1. Revise all cash advance test cases to test with only one credit card.
2. Review all reports filed subsequently and close any such duplicate defect reports before assignment to development.
3. Change the requirements to delete support for American Express cards.
4. Have testers check for similar problems with other cards and report their findings in defect reports.
Q-29: You are the manager of a bank's quality assessment
group, in charge of independent testing for banking applications. You are working on a project to
implement an integrated system that will use three off-the-shelf systems to manage a bank's accounts-receivable
system. You are currently managing the execution of system integration
testing.
Consider the following bug open/closed or convergence chart.
Which of the following interpretations of this chart
provides a reason to not declare the system integration testing complete?
1. The bug find rate has not leveled off.
2. Developers aren't fixing bugs fast enough.
3. The complete set of tests has not yet been run.
4. A number of unresolved bugs remain in the backlog.
Q-30: Which of the following is an example of a cost of
internal failure?
1. Finding a bug during testing
2. Training developers in secure coding practices
3. Designing test cases
4. Fixing a customer-detected bug
Q-31: You are the manager of a bank's quality assessment
group, in charge of independent testing for banking applications. You are in charge of testing for a
project to implement an integrated system that uses three off-the-shelf components to manage a
bank's accounts-receivable system.
Which of the following is most likely to be a major business
motivation for testing this system?
1. Avoiding loss of life
2. Having confidence in correct customer billing
3. Finding as many bugs as possible before release
4. Gathering evidence to sue the component vendors
Q-32: Which of the following is a risk of outsourced testing
that might not apply to distributed testing?
1. Selection of an improper test partner
2. Communication problems created by time zone differences
3. Insufficient skills in some of the test team members
4. Inconsistent test processes across the testing locations
Q-33: You are the manager of a bank's quality assessment
group, in charge of independent testing for banking applications. You used quality risk
analysis to allocate effort and prioritize
your tests during test preparation. You are currently
executing the system integration test. You have finished running each test case once. You are not
certain how much time the project management team will allow for additional test execution
because regulatory changes might require the system to be activated ahead of schedule.
Based on the severity and priority of the bugs found by each
test case, you have calculated weighted failure for each test case. You also know the risk
priority number for each test case based on the quality risk item(s) the test case covers.
Considering both the weighted failure and the risk priority number, you have reprioritized your
test cases for the final days or weeks of testing.
Which of the following is a benefit this reprioritization
will provide?
1. If testing is curtailed, you will have expended the minimum amount of effort possible.
2. If testing is extended, you will have time to run all of your tests.
3. If testing is curtailed, you will have run the most important tests.
4. If testing is extended, you will have covered all of the requirements.
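Note: the reprioritization in this scenario uses two inputs per test case, a weighted failure score (from the severity and priority of the bugs that test found) and the risk priority number of the risk items it covers. The scenario does not define how the two values are combined; the normalization-and-sum rule below is an illustrative assumption:

    # Minimal sketch: reordering remaining test cases by weighted failure and
    # risk priority number (RPN). Values and the combining rule are hypothetical;
    # here a higher RPN is taken to mean higher risk (some schemes invert the scale).
    test_cases = [
        {"id": "TC-01", "weighted_failure": 12, "rpn": 4},
        {"id": "TC-02", "weighted_failure": 0, "rpn": 2},
        {"id": "TC-03", "weighted_failure": 5, "rpn": 10},
    ]
    max_wf = max(tc["weighted_failure"] for tc in test_cases) or 1
    max_rpn = max(tc["rpn"] for tc in test_cases) or 1
    for tc in test_cases:
        tc["score"] = tc["weighted_failure"] / max_wf + tc["rpn"] / max_rpn
    # If testing is curtailed, the highest-scoring (most important) tests ran first.
    for tc in sorted(test_cases, key=lambda t: t["score"], reverse=True):
        print(tc["id"], round(tc["score"], 2))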
Q-34: Which of the following is a benefit of exploratory
testing and other reactive test strategies that would not apply to an analytical requirements-based test
strategy?
1. The ability to utilize a very experienced test team
2. The ability to accurately predict the residual risk prior to delivery
3. The ability to prevent defects during requirements analysis
4. The ability to test effectively without a complete test basis
Q-35: Which of the following is a type of defect that you
can detect more easily in a review than by a dynamic test?
1. Regression
2. Maintainability
3. Performance
4. Reliability
Q-36: Which of the following is a type of review in which
you would expect to detect the greatest percentage of defects present in the item under review?
1. Informal
2. Static analysis
3. Simulation
4. Inspection
Q-37: Assume you are a test manager working on a project to
create a programmable thermostat for home use to control central heating, ventilation, and air
conditioning (HVAC) systems. In addition to the normal HVAC control functions, the
thermostat has the ability to download data to a browser-based application that runs on PCs for further
analysis. During quality risk analysis, you identify compatibility
problems between the browser-based application and the different PC configurations that can
host that application as a quality risk item with a high level of likelihood. Select all of the following actions that should be included in the quality and test plans to best ensure that the project team minimizes this risk upon release.
1. Carefully select the list of supported configurations.
2. Review the list of supported configurations with project stakeholders.
3. Support only a single browser based on technical attributes.
4. Downgrade the likelihood of the risk based on available test resources.
5. Test the supported PC configurations early in test execution.
Q-38: Assume you are a test manager in charge of integration
testing, system testing, and acceptance testing for a bank. You are working on a project
to upgrade an existing automated teller machine system to allow customers to obtain cash
advances from supported credit cards.
You have received a requirements specification that states
that the system should allow cash advances from $20 to $500, inclusively, for all supported
credit cards. Assume that the bank is required contractually to support the following credit
cards: American Express, Visa, Japan Credit Bank, Eurocard, and MasterCard.
The bank has given you the responsibility of organizing a
review of the requirements
specification. Which of the following is a risk for this
review?
1. You do not know the supported credit cards.
2. You do not include the proper stakeholders in the review.
3. You do not know which test levels to address.
4. You do not receive the requirements specification.
Q-39: Assume you are a test manager working on a project to
create a programmable thermostat for home use to control central heating, ventilation, and air
conditioning (HVAC) systems. In addition to the normal HVAC control functions, the
thermostat has the ability to download data to a browser-based application that runs on PCs for further
analysis. During quality risk analysis, you identify compatibility
problems between the browser-based application and the different PC configurations that can
host that application as a quality risk item with a high level of likelihood.
Your test team is currently executing compatibility tests.
Consider the following excerpt from the failure description of a compatibility bug report:
1. Connect the thermostat to a Windows Vista PC.
2. Start the thermostat analysis application on the PC. Application starts normally and recognizes connected thermostat.
3. Attempt to download the data from the thermostat.
4. Data does not download.
5. Attempt to download the data three times. Data will not download.
Based on this information alone, which of the following is a problem that exists with this bug report?
1. Lack of structured testing
2. Inadequate classification information
3. Insufficient isolation
4. Poorly documented steps to reproduce
Q-40: Continue with the previous scenario. Your test team is still executing compatibility tests. Consider the following excerpt from the failure description of a compatibility bug report:
1. Install the thermostat analysis application on a Windows XP PC.
2. Attempt to start the thermostat analysis application.
3. Thermostat analysis application does not start.
4. Reinstall the thermostat analysis application three times. Thermostat analysis application does not start after any reinstallation.
5. This test passed on the previous test release.
Based on this information alone, which of the following is the most reasonable hypothesis about this bug?
1. The bug might be a regression.
2. The bug might be intermittent.
3. The application didn't install on Windows XP PCs before.
4. The bug might be a duplicate.
Answers:
1. 1
2. 1, 2, 5
3. 4, 5
4. 1
5. 3
6. 4
7. 1, 2, 4
8. 1
9. 1
10. 2
11. 3
12. 2
13. 1
14. 4
15. 1
16. 3
17. 2
18. 1, 3
19. 1
20. 3
21. 1, 2, 5
22. 2
23. 4
24. 3
25. 1, 3, 5
26. 4
27. 1, 3, 4
28. 4
29. 4
30. 1
31. 2
32. 1
33. 3
34. 4
35. 2
36. 4
37. 1, 2, 5
38. 2
39. 3
40. 1