Q. 1: Test Implementation and execution has which of the following major tasks?
i. Developing and prioritizing test cases, creating test data, writing test procedures and, optionally, preparing test harnesses and writing automated test scripts.
ii. Creating the test suite from the test cases for efficient test execution.
iii. Verifying that the test environment has been set up correctly.
iv. Determining the exit criteria.
A. i, ii, iii are true and iv is false
B. i, iv are true and ii is false
C. i, ii are true and iii, iv are false
D. ii, iii, iv are true and i is false
Q. 2: One of the fields on a form contains a text box which accepts numeric values in the range of 18 to 25. Identify the invalid equivalence class.
A. 17
B. 19
C. 24
D. 21
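A quick aside on why 17 stands apart here: it lies in an invalid equivalence class (values below the accepted range), while 19, 21 and 24 all belong to the single valid class 18-25. The sketch below is only an illustration of that partitioning; the function name is_valid_age and the partition labels are assumptions made for the example, not part of the original quiz.

def is_valid_age(value: int) -> bool:
    # The field under test accepts integers from 18 to 25 inclusive.
    return 18 <= value <= 25

# One representative value per equivalence class is enough to cover the class.
partitions = {
    "below range (invalid)": 17,  # any value < 18 behaves the same
    "within range (valid)": 21,   # any value in 18..25 behaves the same
    "above range (invalid)": 26,  # any value > 25 behaves the same
}

for name, representative in partitions.items():
    print(f"{name}: {representative} -> accepted={is_valid_age(representative)}")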
Q. 3: Exhaustive Testing is
A. Impractical but possible
B. Practically possible
C. Impractical and impossible
D. Always possible
Q. 4: Handover of testware is a part of which phase?
A. Test Analysis and Design
B. Test Planning and Control
C. Test Closure Activities
D. Evaluating Exit Criteria and Reporting
Q. 5: Which one does not come under international standards?
A. IEC
B. IEEE
C. ISO
D. All of the above
Q. 6: In which phase are static tests used?
A. Requirements
B. Design
C. Coding
D. All of the above
Q. 7: What is the disadvantage of Black Box Testing?
A. Chances of repeating tests that have already been done by the programmer.
B. The test inputs need to be drawn from a large sample space.
C. It is difficult to identify all possible inputs in limited testing time, so writing test cases is slow and difficult.
D. All of the above
Q. 8: Static analysis tools are typically used by
A. Testers
B. Developers
C. Testers & Developers
D. None
Q. 9: The majority of system errors occur in which phase?
A. Requirements Phase
B. Analysis and Design Phase
C. Development Phase
D. Testing Phase
Q. 10: The specification which describes the steps required to operate the system and exercise test cases in order to implement the associated test design is the
A. Test Case Specification
B. Test Design Specification
C. Test Procedure Specification
D. None
Q. 11: What percentage of the life cycle costs of software is spent on maintenance?
A. 10%
B. 30%
C. 50%
D. 70%
Q. 12: When a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called
A. Regression testing
B. Maintenance testing
C. Confirmation testing
D. None of the above
Q. 13: Equivalence testing divides the input domain into classes of data from which test cases can be derived, to reduce the total number of test cases that must be developed.
A. True
B. False
Q. 14: When to stop testing?
A. Stop when the scheduled time for testing expires
B. Stop if 75% of the pre-defined number of errors is detected
C. Stop when all the test cases execute while detecting few errors
D. None of the above
Q. 15: With thorough testing it is possible to remove all defects from a program prior to delivery to the customer.
A. True
B. False
Q. 16: Structure is unknown for which type of development project?
A. Traditional system development
B. Iterative development
C. System maintenance
D. Purchased/contracted software
Q. 17: ________ indicates how important it is to fix the bug and when it should be fixed.
A. Severity
B. Priority
C. All of the above
D. None of the above
Q. 18: The person who leads the review of the document(s), planning the review, running the meeting and following up after the meeting is the
A. Reviewer
B. Author
C. Moderator
D. Auditor
Q. 19: The test method that performs sufficient testing to evaluate every possible path and condition in the application system, and is the only test method that guarantees the proper functioning of the application system, is called
A. Regression Testing
B. Exhaustive Testing
C. Basic Path Testing
D. Branch Testing
Q. 20: Quality Assurance is the process by which product quality is compared with the applicable standards and action is taken when nonconformance is detected.
A. True
B. False
Q. 21: A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects is called
A. Inspection
B. Walkthrough
C. Review
D. Non-conformance
Q. 22: Test cases grouped into manageable (and scheduled) units are called
A. Test Harness
B. Test Suite
C. Test Cycle
D. Test Driver
Q. 23: Configuration and compatibility testing are typically good choices for outsourcing
A. True
B. False
Q. 24: What type of tools should be used for Regression Testing?
A. Performance
B. Record/Playback
C. Both A and B
D. None
Q. 25: System Integration testing should be done after
A. Integration testing
B. System testing
C. Unit testing
D. Component integration testing
Q. 26: During this event the entire system is tested to verify that all functional, informational, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfies management that the system meets specifications.
A. Validation Testing
B. Integration Testing
C. User Acceptance Testing
D. System Testing
Q. 27: What is the normal order of activities in which software testing is organized?
A. Unit, integration, system, validation
B. System, integration, unit, validation
C. Unit, integration, validation, system
D. None of the above
Q. 28: The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed.
A. True
B. False
Q. 29: Beta testing is performed at the developing organization's site, whereas Alpha testing is performed by people at their own locations.
A. True
B. False
Q. 30: The principal attributes of tools and automation are
A. Speed & efficiency
B. Accuracy & precision
C. All of the above
D. None of the above
Q. 31: In which type of testing does the tester or tool know nothing about the software being tested, and just click or type randomly?
A. Random testing
B. Gorilla testing
C. Adhoc testing
D. Dumb monkey testing
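For contrast with the other options: a dumb monkey has no model of the software at all and simply feeds it random input. The fragment below is a minimal illustrative sketch of that idea; the function_under_test stand-in is an assumption invented for the example, not anything from the quiz.

import random
import string

def function_under_test(text: str) -> int:
    # Stand-in for the software being tested; the monkey knows nothing about it.
    return len(text.strip())

random.seed(0)  # reproducible random "typing"
for _ in range(5):
    payload = "".join(random.choice(string.printable) for _ in range(8))
    print(repr(payload), "->", function_under_test(payload))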
Q. 32: A series of probing questions about the completeness and attributes of an application system is called
A. Checklist
B. Checkpoint review
C. Decision table
D. Decision tree
Q. 33: The testing technique that requires devising test cases to demonstrate that each program function is operational is called
A. Black-box testing
B. Glass-box testing
C. Grey-box testing
D. White-box testing
Q. 34: A white-box testing technique that measures the number or percentage of decision directions executed by the designed test cases is called
A. Condition coverage
B. Decision/Condition coverage
C. Decision Coverage
D. Branch coverage
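To make the "decision directions" wording concrete, the sketch below counts how many of the two outcomes of a single if decision a set of test inputs actually exercises. It is an illustration only; the names classify and decision_coverage are assumptions made for the example.

def classify(x: int) -> str:
    # One decision with two directions: (x > 0) can be True or False.
    if x > 0:
        return "positive"
    return "non-positive"

def decision_coverage(test_inputs) -> float:
    # Record which directions of the single decision were taken, then report the percentage.
    outcomes = set()
    for x in test_inputs:
        classify(x)
        outcomes.add(x > 0)
    return 100.0 * len(outcomes) / 2  # two possible directions for one decision

print(decision_coverage([5]))      # 50.0  -> only the True direction executed
print(decision_coverage([5, -3]))  # 100.0 -> both directions executed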
Q. 35: Which document summarizes the testing activities associated with one or more test design specifications?
A. Test Summary Report
B. Test Log
C. Test Incident Report
D. Test Script
Q. 36: Testing without a real plan and test cases is called
A. Gorilla testing
B. Monkey testing
C. Adhoc testing
D. All of the above
Q. 37: Which rule should not be followed for reviews?
A. Defects and issues are identified and corrected
B. The product is reviewed not the producer
C. All members of the reviewing team are responsible for the result of the review
D. Each review has a clear predefined objective
Q. 38: Verification can be termed as "Are we building the product right?"
A. True
B. False
Q. 39: Which testing is used to verify that the system can perform properly when internal program or system limitations have been exceeded?
A. Stress Testing
B. Load Testing
C. Performance Testing
D. Volume Testing
Q. 40: Defects are recorded for three major purposes. They are:
1. To correct the defect
2. To report the status of the application
3. To improve the software development process
A. True
B.
False
Answers:
Q.1-A
Q.2-A
Q.3-A
Q.4-C
Q.5-B
Q.6-D
Q.7-D
Q.8-B
Q.9-A
Q.10-C
Q.11-D
Q.12-C
Q.13-A
Q.14-A
Q.15-B
Q.16-D
Q.17-C
Q.18-C
Q.19-C
Q.20-A
Q.21-A
Q.22-B
Q.23-A
Q.24-B
Q.25-C
Q.26-C
Q.27-A
Q.28-A
Q.29-B
Q.30-C
Q.31-D
Q.32-A
Q.33-C
Q.34-B
Q.35-C
Q.36-D
Q.37-C
Q.38-A
Q.39-A
Q.40-A
I guess the answer for Q.17 is Priority, not All of the Above. Please correct me if I am wrong.
I agree with you; Q.17 is B (Priority).
I guess the answer for Q.25 is B (System Testing). Please correct me if I am wrong.
I just wanted to agree with you; I also think it should be B (System Testing). Dear admin, please correct us if we're wrong.
Agree totally; please see the syllabus.
I guess the answer for Q.26 is D (System Testing). Please correct me if I am wrong.
Hi, could anyone please elaborate more on "when to stop testing?" (Q.14)? As of now, all previous exams consistently say that we should stop testing when the scheduled time for testing expires. In my opinion, we should stop testing when the risk associated with releasing the product is at an acceptable level.
Unless it is the cynical approach that is presented here: "Time is up, here are the results; we did not check these key features of the product, so the risk is high, but it's up to you to decide, dear stakeholder, whether you want this product released or not."
Q.31 seems to be wrong; D should be the answer. According to the ISTQB syllabus, monkey testing is testing by means of a random selection from a large range of inputs and by randomly pushing buttons, ignorant of how the product is being used.
Why is the answer to Q.35 Incident Report?
Dear author,
Could you please review Questions 8, 17, 25 and 35 for any typos?
Could you please provide explanations for Questions 14, 26 and 35?
Q.26: I am confused whether it is C or D; since there is no discussion of the user, I guess D might be most suitable.
Q.35: It might be A, since there is no information about an incident in the question.
Thanks.