Software Testing Glossary


A
Acceptance Testing: Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate that the software meets a set of agreed acceptance criteria. 
Accessibility Testing: Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.). 
Ad Hoc Testing: A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing. 
Agile Testing: Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development. 
Application Binary Interface (ABI): A specification defining requirements for portability of applications in binary form across different system platforms and environments. 
Application Programming Interface (API): A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services. 
Automated Software Quality (ASQ): The use of software tools, such as automated testing tools, to improve software quality. 
Automated Testing:


Testing employing software tools which execute tests without manual intervention. Can be applied to GUI, performance, API, and other kinds of testing.

· The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.
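For illustration only (this example is not part of the original glossary), a minimal automated test in Python's unittest: the framework sets up preconditions, executes the test, and compares actual to expected outcomes without manual intervention. The add function is a hypothetical unit under test.

    import unittest

    def add(a, b):
        # Hypothetical unit under test, included only to make the example runnable.
        return a + b

    class TestAdd(unittest.TestCase):
        def test_add_returns_sum(self):
            # The framework compares the actual outcome to the predicted outcome.
            self.assertEqual(add(2, 3), 5)

    if __name__ == "__main__":
        unittest.main()  # executes the tests and reports results automatically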

B
Backus-Naur Form: A metalanguage used to formally describe the syntax of a language. 
Basic Block: A sequence of one or more consecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing. 
Baseline: The point at which some deliverable produced during the software engineering process is put under formal change control. 
Beta Testing: Testing of a pre-release version of a software product, conducted by customers. 
Binary Portability Testing: Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification. 
Black Box Testing: Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component. 
Bottom Up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. 
Boundary Testing: Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.) 
Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to Equivalence Partitioning but focuses on "corner cases", or values that are just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. 
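As a sketch of boundary value analysis (not from the original glossary; the in_range function and its -100..1000 range are assumed for illustration), the tests exercise the boundary values themselves and the values immediately outside them.

    def in_range(value):
        # Hypothetical function under test: accepts values from -100 to 1000 inclusive.
        return -100 <= value <= 1000

    def test_boundary_values():
        # On-boundary values should be accepted.
        assert in_range(-100) and in_range(1000)
        # Values just outside the boundary should be rejected.
        assert not in_range(-101) and not in_range(1001)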
Branch Testing: Testing in which all branches in the program source code are tested at least once. 
Breadth Testing: A test suite that exercises the full functionality of a product but does not test features in detail.

 

C
CAST: Computer Aided Software Testing. 
Capture/Replay Tool: A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools. 
CMM: The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputs and the associated output effects which can be used to design test cases. 
Code Complete: Phase of development where functionality is implemented in its entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention. 
Code Inspection: A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. 
Code Walkthrough: A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, operating systems, or hardware. 
Component: A minimal software item for which a separate specification is available. 
Component Testing: See Unit Testing. 
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module, or database records. Identifies and measures the level of locking, deadlocking, and use of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems. 
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing. 
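For reference (this formula is not stated in the original entry but is the standard definition), cyclomatic complexity is usually computed as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components of the control-flow graph. A function containing a single if/else, for example, has V(G) = 2 and therefore needs at least two basis-path tests.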
D
Data Dictionary: A database that contains definitions of all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system. 
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing. 
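A minimal data-driven testing sketch (illustrative only; the divide function and the divide_cases.csv file name are assumed, not taken from the original text): the test action is fixed, while the inputs and expected outcomes come from an external file.

    import csv

    def divide(a, b):
        # Hypothetical function under test.
        return a / b

    def test_divide_from_csv():
        # Each row of the external data file supplies two inputs and the expected result.
        with open("divide_cases.csv", newline="") as f:
            for a, b, expected in csv.reader(f):
                assert divide(float(a), float(b)) == float(expected)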
Debugging: The process of finding and removing the causes of software failures. 
Defect: Nonconformance to requirements or functional/program specification. 
Dependency Testing: Examines an application's requirements for pre-existing software, initial states, and configuration in order to maintain proper functionality. 
Depth Testing: A test that exercises a feature of a product in full detail. 
Dynamic Testing: Testing software through executing it. See also Static Testing. 
E
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. 
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution. 
End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same, based on the component's specification. 
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes. 
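A small sketch of equivalence partitioning (illustrative only; the is_eligible function and its 18..65 range are assumed): one representative value is tested from each equivalence class instead of testing every possible input.

    def is_eligible(age):
        # Hypothetical function under test: valid ages are 18 to 65 inclusive.
        return 18 <= age <= 65

    def test_one_representative_per_class():
        assert not is_eligible(10)   # representative of the class: age < 18
        assert is_eligible(40)       # representative of the class: 18 <= age <= 65
        assert not is_eligible(70)   # representative of the class: age > 65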
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning, analysis, and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure they correspond to its specifications.

· Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or piece of functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has been integrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is software testing. 
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems. 
Installation Testing: Confirms that the application under test installs correctly on the supported configurations and can be upgraded or uninstalled without loss of data or functionality. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: Testing of software that has been adapted for a specific locality, to verify that it is designed for and behaves correctly in that locality. 
Loop Testing: A white box testing technique that exercises program loops. 
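As an illustration (not part of the original glossary; the total function is assumed), loop testing commonly exercises a loop with zero, one, a typical number, and a large number of iterations.

    def total(values):
        # Hypothetical loop under test.
        result = 0
        for v in values:
            result += v
        return result

    def test_loop_iteration_counts():
        assert total([]) == 0                       # zero iterations
        assert total([5]) == 5                      # one iteration
        assert total([1, 2, 3]) == 6                # a typical number of iterations
        assert total(list(range(1000))) == 499500   # many iterations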
M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code. 
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash. 
N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code are tested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives. 
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality. 
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function that determines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management. 
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access. 
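A minimal race condition sketch in Python (illustrative only): two threads perform unsynchronized read-modify-write updates on a shared counter, so updates can be lost; guarding the increment with a threading.Lock would moderate the simultaneous access.

    import threading

    counter = 0  # shared resource

    def increment_many(times):
        global counter
        for _ in range(times):
            counter += 1  # read-modify-write with no lock: not atomic

    threads = [threading.Thread(target=increment_many, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # 200000 is expected, but lost updates may leave the counter lower.
    print(counter)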

Ramp Testing: Continuously raising an input signal until the system breaks down. 

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions. 

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made. 

Release Candidate: A pre-release version which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

 

S
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing. 
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload. 
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level. 


Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. 

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. 

Software Requirements Specification: A deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for software. 

Software Testing: A set of activities conducted with the intent of finding errors in software. 
Static Analysis: Analysis of a program carried out without executing the program. 
Static Analyzer: A tool that carries out static analysis. 
Static Testing: Analysis of a program carried out without executing the program. 

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

T

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing. 
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used. 
Test Case:

A Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

· A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code. 
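A sketch of the TDD rhythm (illustrative only; the slugify function and its behaviour are assumed, not taken from the original text): the unit test is written first and fails, then just enough production code is written to make it pass, and the cycle repeats.

    # Step 1: the test is written (and run, failing) before the production code exists.
    def test_slugify_lowercases_and_joins_with_hyphens():
        assert slugify("Hello World") == "hello-world"

    # Step 2: just enough production code is written to make the test pass.
    def slugify(text):
        return "-".join(text.lower().split())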
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness. 
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers. 
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver. 
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref. IEEE Std 829. 
Test Procedure: A document providing detailed instructions for the execution of one or more test cases. 
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. 
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results, and execution conditions for the associated tests. 
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test. 
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. 
Total Quality Management: A company commitment to develop a process that achieves high quality products and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual software components. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs, or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. 
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
Security Testing: Testing which confirms that the program restricts access to authorized personnel only, and that authorized personnel can access the functions available to their security level. 


Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. 
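A sketch of a scripted smoke suite; start_app, login, and load_dashboard are hypothetical stand-ins for an application's major functions, and the suite only reports whether each major function works at all:

    # Hypothetical stand-ins for major functions; a real smoke test would
    # import and call the actual application code or hit its endpoints.
    def start_app():      return True
    def login():          return True
    def load_dashboard(): return True

    SMOKE_CHECKS = [
        ("application starts", start_app),
        ("user can log in", login),
        ("dashboard loads", load_dashboard),
    ]

    def run_smoke_suite():
        failures = []
        for name, check in SMOKE_CHECKS:
            try:
                assert check()
                print(f"PASS  {name}")
            except Exception as exc:
                print(f"FAIL  {name}: {exc}")
                failures.append(name)
        return not failures   # build passes the smoke test only if every check passed

    if __name__ == "__main__":
        raise SystemExit(0 if run_smoke_suite() else 1)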

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear only after a large number of transactions have been executed. 

Software Requirements Specification: A deliverable that describes all data, functional, and behavioral requirements, all constraints, and all validation requirements for the software. 

Software Testing: A set of activities conducted with the intent of finding errors in software. 
Static Analysis: Analysis of a program carried out without executing the program. 
Static Analyzer: A tool that carries out static analysis. 
Static Testing: Analysis of a program carried out without executing the program. 

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of the internal workings and structure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

 

T
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc. (a sketch of such a record appears after this entry).

·                A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
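A sketch of how the information for a single test case might be recorded; the field names and the TC-042/REQ-AUTH-003 identifiers are purely illustrative:

    # Illustrative structure only; organizations record test cases in many formats.
    login_test_case = {
        "id": "TC-042",
        "requirement": "REQ-AUTH-003",                     # requirement being tested
        "preconditions": ["user account 'demo' exists", "application is running"],
        "steps": [
            "open the login page",
            "enter user name 'demo' and a valid password",
            "press the Login button",
        ],
        "expected_outcome": "the user is taken to the dashboard page",
        "environment": "staging server, current browser build",
    }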

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code. 
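A compressed illustration of the TDD cycle with a hypothetical slugify function: the test is written first (and initially fails), then just enough production code is written to make it pass, then the code is refactored while the test is kept green:

    # Step 1 (red): the test is written first and fails because slugify
    # does not exist yet or returns the wrong result.
    def test_slugify_replaces_spaces_and_lowercases():
        assert slugify("Hello World") == "hello-world"

    # Step 2 (green): write just enough production code to make the test pass.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    # Step 3: refactor with the test kept passing, then repeat for the next behavior.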
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness. 
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers. 
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver. 
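As one concrete example, Python's standard unittest runner plays the driver/harness role: it collects the test methods and executes them. A minimal sketch with a trivial hypothetical unit under test:

    import unittest

    def add(a, b):              # trivial unit under test, for illustration
        return a + b

    class AddTests(unittest.TestCase):
        def test_add_positive(self):
            self.assertEqual(add(2, 3), 5)

        def test_add_negative(self):
            self.assertEqual(add(-2, -3), -5)

    if __name__ == "__main__":
        # unittest acts as the driver/harness: it collects the test methods,
        # executes them, and reports the results.
        unittest.main()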
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailed instructions for the execution of one or more test cases. 
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. 
Test Specification: A document specifying the test approach for a software feature or combination of features, and the inputs, predicted results, and execution conditions for the associated tests. 
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test. 
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested. 
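A small sketch of the stubbing involved in top-down testing: the high-level build_report component is tested first, with the lower-level data source replaced by a stub until the real component is integrated (all names are illustrative):

    # High-level component under test; the lower-level data source is passed in,
    # so during top-down testing it can be replaced by a stub.
    def build_report(fetch_sales):
        rows = fetch_sales()
        total = sum(amount for _, amount in rows)
        return {"lines": len(rows), "total": total}

    # Stub standing in for the not-yet-integrated lower-level component.
    def fetch_sales_stub():
        return [("widget", 10.0), ("gadget", 5.5)]

    def test_build_report_with_stubbed_lower_level():
        report = build_report(fetch_sales_stub)
        assert report == {"lines": 2, "total": 15.5}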
Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection, and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection, and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs, or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of the internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. 
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows that end users are expected to follow.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or procedures used to convert data from existing systems for use in replacement systems. 
Cyclomatic Complexity: A measure of the logical complexity of an algorithm, used in white-box testing. 
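For reference, the usual McCabe definition computes it from the control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single routine); for structured code this equals the number of decision points plus one. A function containing a single if/else therefore has V(G) = 2, and basis path testing would call for at least two test cases for it.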
D
Data Dictionary: A database that contains definitions of all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents a functional decomposition of a system. 
Data Driven Testing: Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing. 
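A minimal pytest sketch of the idea: the same test body runs once per data row. The compute_discount function and the rows are hypothetical; in practice the rows would be read from a CSV file or spreadsheet rather than inlined.

    import pytest

    # In practice these rows would come from a CSV file or spreadsheet;
    # they are inlined here so the example is self-contained.
    DISCOUNT_ROWS = [
        # (order_total, expected_discount)
        (50, 0),
        (100, 5),
        (500, 50),
    ]

    def compute_discount(order_total):
        # Hypothetical function under test.
        if order_total >= 500:
            return 50
        if order_total >= 100:
            return 5
        return 0

    @pytest.mark.parametrize("order_total,expected", DISCOUNT_ROWS)
    def test_discount_from_data(order_total, expected):
        assert compute_discount(order_total) == expected
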
Debugging: The process of finding and removing the causes of software failures. 
Defect: Nonconformance to requirements or functional/program specification. 
Dependency Testing: Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality. 
Depth Testing: A test that exercises a feature of a product in full detail. 
Dynamic Testing: Testing software through executing it. See also Static Testing. 
E
Emulator: A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system. 
Endurance Testing: Checks for memory leaks or other problems that may occur with prolonged execution. 
End-to-End Testing: Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification. 
Equivalence Partitioning: A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes. 
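A minimal sketch, assuming pytest-style test functions and a hypothetical is_valid_age whose valid partition is 0 to 120: one representative value is chosen from each class rather than testing every possible input.

    def is_valid_age(age):
        # Hypothetical function under test: valid ages are 0 to 120 inclusive.
        return 0 <= age <= 120

    # One representative per equivalence class.
    def test_below_valid_partition():
        assert is_valid_age(-5) is False

    def test_valid_partition():
        assert is_valid_age(30) is True

    def test_above_valid_partition():
        assert is_valid_age(200) is False
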
Exhaustive Testing: Testing which covers all combinations of input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning, analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detail the characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure they correspond to its specifications.

· Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particular module or functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has been integrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is software testing. 
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems. 
Installation Testing: Confirms that the application under test installs, configures, and runs correctly on the supported platforms and configurations, and that it can be upgraded and uninstalled cleanly. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to testing software that has been specifically adapted for a specific locality. 
Loop Testing: A white box testing technique that exercises program loops. 
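A minimal sketch of the classic loop-testing inputs, assuming pytest-style test functions and a hypothetical total function: skip the loop entirely, run it once, run a typical pass, and run many iterations.

    def total(values):
        # Hypothetical function under test: a simple accumulation loop.
        result = 0
        for v in values:
            result += v
        return result

    def test_zero_iterations():
        assert total([]) == 0

    def test_one_iteration():
        assert total([7]) == 7

    def test_typical_iterations():
        assert total([1, 2, 3, 4]) == 10

    def test_many_iterations():
        assert total([1] * 10_000) == 10_000
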
M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something, such as number of bugs per lines of code. 
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash. 
N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing. 
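A minimal pytest sketch of negative tests, using a hypothetical parse_quantity function: the tests pass only when invalid input is rejected.

    import pytest

    def parse_quantity(text):
        # Hypothetical function under test: quantities must be positive integers.
        value = int(text)
        if value <= 0:
            raise ValueError("quantity must be positive")
        return value

    def test_rejects_non_numeric_input():
        with pytest.raises(ValueError):
            parse_quantity("abc")

    def test_rejects_zero():
        with pytest.raises(ValueError):
            parse_quantity("0")
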
O

P

Path Testing: Testing in which all paths in the program source code are tested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives. 
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality. 
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function that determines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization as regards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access. 
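A minimal sketch of the problem in Python: several threads perform an unsynchronized read-modify-write on a shared counter, so updates are typically lost and the final count falls short of the expected total.

    import threading

    counter = 0

    def increment_many(times):
        global counter
        for _ in range(times):
            current = counter       # read
            counter = current + 1   # write: another thread may have changed
                                    # counter in between, losing its update

    threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Expected 400000; lost updates typically make the observed value smaller.
    print("counter =", counter)
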

Ramp Testing: Continuously raising an input signal until the system breaks down. 

Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions. 

Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made. 

Release Candidate: A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

 

S
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing. 
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in workload. 
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level. 


Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. 
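A minimal sketch of a smoke check for a deployed service, using only the Python standard library; the base URL and /health endpoint are hypothetical and would normally come from configuration.

    import urllib.request

    # Hypothetical deployment URL; a real smoke test would read it from config.
    BASE_URL = "http://localhost:8080"

    def test_service_responds():
        # The application starts and its health endpoint answers at all.
        with urllib.request.urlopen(BASE_URL + "/health", timeout=5) as response:
            assert response.status == 200
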

Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. 
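A minimal soak-loop sketch in Python: it drives a hypothetical process_transaction repeatedly for a fixed duration while sampling memory with the standard-library tracemalloc module, so growth over time becomes visible.

    import time
    import tracemalloc

    def process_transaction(i):
        # Hypothetical unit of work; a real soak test would drive the actual
        # system under test here.
        return i * 2

    def soak(duration_seconds=60):
        tracemalloc.start()
        deadline = time.time() + duration_seconds
        count = 0
        while time.time() < deadline:
            process_transaction(count)
            count += 1
        current, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"{count} transactions, current memory {current} bytes, peak {peak} bytes")

    if __name__ == "__main__":
        soak()
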

Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software. 

Software Testing: A set of activities conducted with the intent of finding errors in software. 
Static Analysis: Analysis of a program carried out without executing the program. 
Static Analyzer: A tool that carries out static analysis. 
Static Testing: Analysis of a program carried out without executing the program. 

Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

T
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

· The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

· A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code. 
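A minimal sketch of the red-green rhythm, with a hypothetical slugify function: the test is written first (and fails), then just enough production code is written to make it pass.

    # Step 1: write the test first; it fails because slugify does not exist yet.
    def test_slugify_lowercases_and_joins_words():
        assert slugify("Hello World") == "hello-world"

    # Step 2: write just enough production code to make the test pass.
    def slugify(text):
        return "-".join(text.lower().split())
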
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness. 
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers. 
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver. 
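A toy sketch of what a driver or harness does: it discovers the test functions in a module, runs each one, and reports the results. Real projects would normally rely on an existing framework such as pytest or unittest instead of writing this by hand.

    import sys
    import traceback

    def test_addition():
        assert 1 + 1 == 2

    def test_string_upper():
        assert "abc".upper() == "ABC"

    def run_all_tests():
        # Collect every callable in this module whose name starts with "test_".
        module = sys.modules[__name__]
        tests = [getattr(module, name) for name in dir(module) if name.startswith("test_")]
        failures = 0
        for test in tests:
            try:
                test()
                print(f"PASS {test.__name__}")
            except Exception:
                failures += 1
                print(f"FAIL {test.__name__}")
                traceback.print_exc()
        print(f"{len(tests) - failures} passed, {failures} failed")

    if __name__ == "__main__":
        run_all_tests()
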
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailed instructions for the execution of one or more test cases. 
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. 
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test. 
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. 
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual software components. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. 
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. 
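For illustration (the order_summary() function and its price service are invented), a top-level component can be exercised while a not-yet-integrated lower-level dependency is simulated by a stub:

import unittest
from unittest.mock import Mock

# Hypothetical top-level component under test.
def order_summary(order_id, price_service):
    """Build a summary line using a lower-level price lookup service."""
    price = price_service.get_price(order_id)
    return "Order {}: ${:.2f}".format(order_id, price)

class TestOrderSummaryTopDown(unittest.TestCase):
    def test_summary_uses_stubbed_price(self):
        # The real price service is not integrated yet, so simulate it with a stub.
        stub_price_service = Mock()
        stub_price_service.get_price.return_value = 19.99
        self.assertEqual(order_summary(42, stub_price_service), "Order 42: $19.99")

if __name__ == "__main__":
    unittest.main()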
Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
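Although usually maintained as a spreadsheet, the underlying idea can be sketched as a simple mapping plus a coverage check (the requirement and test case IDs below are invented):

# Hypothetical requirement IDs mapped to the test cases that exercise them.
TRACEABILITY_MATRIX = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no test case yet: a coverage gap
}

def uncovered_requirements(matrix):
    """Return the requirements that are not traced to any test case."""
    return [req for req, cases in matrix.items() if not cases]

if __name__ == "__main__":
    print("Uncovered requirements:", uncovered_requirements(TRACEABILITY_MATRIX))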
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual software components. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. 
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.


录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives. 
Quality Circle: A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality. 
Quality Control: The operational techniques and the activities used to fulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function that determines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management. 
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism to moderate simultaneous access. 
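
The classic demonstration, sketched below with two Python threads incrementing a shared counter: the unprotected read-modify-write can interleave, so the final count is often lower than expected (the exact loss depends on scheduling).

import threading

counter = 0

def increment(times):
    global counter
    for _ in range(times):
        counter += 1     # unprotected read-modify-write on the shared counter

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Often prints less than 200000; a threading.Lock around the update removes the race.
print(f"expected 200000, got {counter}")
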

Ramp Testing: Continuously raising an input signal until the system breaks down. 
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power-out conditions. 
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made. 
Release Candidate: A pre-release version which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

 

S
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing. 
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load. 
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level. 
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. 
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. 
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software. 
Software Testing: A set of activities conducted with the intent of finding errors in software. 
Static Analysis: Analysis of a program carried out without executing the program. 
Static Analyzer: A tool that carries out static analysis. 
Static Testing: Analysis of a program carried out without executing the program. 
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

T
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirements being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

·                A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
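
As a hedged illustration of those elements written down as code, the login and create_user functions below, and the precondition they establish, are hypothetical stand-ins for a real system under test.

_users = {}

def create_user(name, password):
    # Hypothetical setup step used to establish the precondition.
    _users[name] = password

def login(name, password):
    # Hypothetical function under test.
    return _users.get(name) == password

def test_login_with_valid_credentials():
    # Precondition: the account exists before the test runs.
    create_user("alice", "s3cret")
    # Test step / input: attempt to log in with the known credentials.
    result = login("alice", "s3cret")
    # Expected outcome / verification step.
    assert result is True
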

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, i.e. roughly as many lines of test code as production code. 
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness. 
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers. 
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver. 
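
A minimal sketch of what a driver/harness does: it executes a list of test callables and reports pass/fail; real harnesses such as unittest or pytest layer fixtures, discovery, and reporting on top of this idea.

def run_tests(tests):
    # Execute each test callable; a test fails if it raises any exception.
    passed = failed = 0
    for test in tests:
        try:
            test()
            passed += 1
            print(f"PASS {test.__name__}")
        except Exception as exc:
            failed += 1
            print(f"FAIL {test.__name__}: {exc}")
    print(f"{passed} passed, {failed} failed")

def test_addition():
    assert 1 + 1 == 2

def test_string_upper():
    assert "abc".upper() == "ABC"

if __name__ == "__main__":
    run_tests([test_addition, test_string_upper])
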
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailed instructions for the execution of one or more test cases. 
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. 
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test. 
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. 
Total Quality Management: A company commitment to develop a process that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual software components. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing.
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a large number of tests, often roughly as many lines of test code as production code. 
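A compressed, illustrative view of the test-first rhythm: the unit tests are written before the production code, fail at first, and then just enough code is written to make them pass. The fizzbuzz example is invented for the sketch.

```python
import unittest

# Step 1 (red): these tests are written first and fail until the
# production code below exists.
class FizzBuzzTests(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(9), "Fizz")

    def test_other_numbers_pass_through(self):
        self.assertEqual(fizzbuzz(7), "7")

# Step 2 (green): just enough production code to make the tests pass.
def fizzbuzz(n):
    return "Fizz" if n % 3 == 0 else str(n)

if __name__ == "__main__":
    unittest.main()
```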
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness. 
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers. 
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver. 
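A minimal sketch of a test driver/harness, covering both the Test Driver and Test Harness entries above: it executes each test function, catches failures, and reports the results. The add function and its tests are placeholders for the real software under test.

```python
import traceback

# Hypothetical component under test, defined here so the sketch is runnable.
def add(a, b):
    return a + b

def test_addition():
    assert add(2, 2) == 4

def test_addition_with_negatives():
    assert add(-1, 1) == 0

def run_tests(tests):
    """A minimal test driver/harness: run each test and report pass/fail."""
    passed = failed = 0
    for test in tests:
        try:
            test()
            passed += 1
            print(f"PASS {test.__name__}")
        except Exception:
            failed += 1
            print(f"FAIL {test.__name__}")
            traceback.print_exc()
    print(f"{passed} passed, {failed} failed")

if __name__ == "__main__":
    run_tests([test_addition, test_addition_with_negatives])
```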
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailed instructions for the execution of one or more test cases. 
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. 
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test. 
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. 
Total Quality Management: A company commitment to develop a process that achieves high quality products and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
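Although usually maintained as a document or spreadsheet, a traceability matrix can be sketched as a simple mapping from requirement IDs to the test cases that cover them; the IDs below are invented, and the small check flags requirements with no covering test.

```python
# Hypothetical requirement and test-case identifiers, for illustration only.
requirements = ["REQ-001", "REQ-002", "REQ-003"]

traceability_matrix = {
    "REQ-001": ["TC-101", "TC-102"],  # requirement -> covering test cases
    "REQ-002": ["TC-103"],
    # REQ-003 has no tests yet, so the check below should flag it.
}

def uncovered(requirements, matrix):
    """Return the requirements that no test case traces back to."""
    return [req for req in requirements if not matrix.get(req)]

if __name__ == "__main__":
    for req in uncovered(requirements, traceability_matrix):
        print(f"Not covered by any test case: {req}")
```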
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual software components. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. 
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has been integrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is software testing. 
Inspection: A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems. 
Installation Testing: Confirms that the application under test installs (and uninstalls) correctly in its supported environments and configurations, and that it is operational after installation. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: Testing of software that has been adapted (localized) for a specific locality or locale. 
Loop Testing: A white box testing technique that exercises program loops. 
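A common heuristic is to exercise each loop with zero, one, and several iterations; the sketch below applies this to a hypothetical summing function:

    # Hypothetical function containing a simple loop.
    def total(values):
        result = 0
        for v in values:
            result += v
        return result

    assert total([]) == 0             # zero iterations
    assert total([5]) == 5            # exactly one iteration
    assert total([1, 2, 3, 4]) == 10  # multiple iterations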
M

Metric: A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real, objective measurement of something, such as the number of bugs per line of code. 
Monkey Testing: Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash. 
N

Negative Testing: Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing. 
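A minimal sketch of a negative test in Python using pytest, assuming a hypothetical parse_age() function that is expected to reject invalid input by raising ValueError:

    import pytest

    # Hypothetical function expected to reject invalid input.
    def parse_age(text):
        value = int(text)  # raises ValueError for non-numeric text
        if value < 0:
            raise ValueError("age cannot be negative")
        return value

    def test_rejects_non_numeric_input():
        with pytest.raises(ValueError):
            parse_age("not a number")

    def test_rejects_negative_age():
        with pytest.raises(ValueError):
            parse_age("-5")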
O

P

Path Testing: Testing in which all paths in the program source code are tested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements, and whether these arrangements are implemented effectively and are suitable to achieve objectives. 
Quality Circle: A group of individuals with related interests who meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or the improvement of quality. 
Quality Control: The operational techniques and activities used to fulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function that determines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization as regards quality, as formally expressed by top management. 
Quality System: The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access. 
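The classic illustration is two threads incrementing a shared counter with no lock; because the read-modify-write is not atomic, some increments can be lost. A minimal Python sketch with hypothetical names:

    import threading

    counter = 0  # shared resource

    def worker():
        global counter
        for _ in range(100_000):
            current = counter       # read
            counter = current + 1   # write: not atomic together with the read

    threads = [threading.Thread(target=worker) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # With no lock around the read-modify-write, the final value may be less
    # than 200000 because one thread's update can overwrite the other's;
    # guarding the update with a threading.Lock() removes the race.
    print(counter)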

Ramp Testing: Continuously raising an input signal until the system breaks down. 
Recovery Testing: Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions. 
Regression Testing: Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made. 
Release Candidate: A pre-release version which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

 

S
Sanity Testing: Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing. 
Scalability Testing: Performance testing focused on ensuring the application under test gracefully handles increases in work load. 
Security Testing: Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level. 
Smoke Testing: A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire. 
Soak Testing: Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. 
Software Requirements Specification: A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software. 
Software Testing: A set of activities conducted with the intent of finding errors in software. 
Static Analysis: Analysis of a program carried out without executing the program. 
Static Analyzer: A tool that carries out static analysis. 
Static Testing: Analysis of a program carried out without executing the program. 
Storage Testing: Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

 

T
Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as the requirements being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.

·                A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Test Driven Development: Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as production code. 
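A minimal TDD-style sketch in Python: the unit tests are written first (and initially fail), then just enough production code is added to make them pass; all names here are illustrative:

    import unittest

    # Step 2: production code, written only after the tests below existed.
    def slugify(title):
        return title.strip().lower().replace(" ", "-")

    # Step 1: the tests are written first and drive the implementation.
    class TestSlugify(unittest.TestCase):
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_surrounding_whitespace_is_trimmed(self):
            self.assertEqual(slugify("  Draft  "), "draft")

    if __name__ == "__main__":
        unittest.main()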
Test Driver: A program or test tool used to execute tests. Also known as a Test Harness. 
Test Environment: The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test, including stubs and test drivers. 
Test First Design: Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test. 
Test Harness: A program or test tool used to execute tests. Also known as a Test Driver. 
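A bare-bones sketch of a test driver/harness in Python: it supplies the inputs, calls the component under test, compares actual to expected outcomes, and reports the results (all names are hypothetical):

    # Hypothetical component under test.
    def discount(price, is_member):
        return round(price * 0.9, 2) if is_member else price

    # Minimal driver: (inputs, expected outcome) pairs executed in sequence.
    CASES = [
        ((100.0, True), 90.0),
        ((100.0, False), 100.0),
        ((19.99, True), 17.99),
    ]

    failures = 0
    for args, expected in CASES:
        actual = discount(*args)
        if actual != expected:
            failures += 1
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}  discount{args} -> {actual} (expected {expected})")

    print(f"{len(CASES) - failures}/{len(CASES)} cases passed")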
Test Plan: A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailed instructions for the execution of one or more test cases. 
Test Script: Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool. 
Test Specification: A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization; there may be several Test Suites for a particular product, for example. In most cases, however, a Test Suite is a high-level concept, grouping together hundreds or thousands of tests related by what they are intended to test. 
Test Tools: Computer programs used in the testing of a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. 
Total Quality Management: A company commitment to develop a process that achieves high-quality products and customer satisfaction. 
Traceability Matrix: A document showing the relationship between Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users can learn and use a product. 
Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual software components. 
V
Validation: The process of evaluating software at the end of the software development process to ensure compliance with software requirements. The techniques for validation are testing, inspection and reviewing. 
Verification: The process of determining whether or not the products of a given phase of the software development cycle meet the implementation steps and can be traced to the incoming objectives established during the previous phase. The techniques for verification are testing, inspection and reviewing. 
Volume Testing: Testing which confirms that any values that may become large over time (such as accumulated counts, logs, and data files) can be accommodated by the program and will not cause the program to stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs or code characterized by the author of the material under review guiding the progression of the review. 
White Box Testing: Testing based on an analysis of internal workings and structure of a piece of software. Includes techniques such as Branch Testing and Path Testing. Also known as Structural Testing and Glass Box Testing. Contrast with Black Box Testing. 
Workflow Testing: Scripted end-to-end testing which duplicates specific workflows which are expected to be utilized by the end-user.

Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。A

Acceptance Testing: Testing conducted to enable auser/customer to determine whether to accept a software product. Normallyperformed to validate the software meets a set of agreed acceptancecriteria. 
Accessibility Testing: Verifying a product is accessible tothe people having disabilities (deaf, blind, mentally disabled etc.). 
Ad Hoc Testing: A testing phase where the testertries to 'break' the system by randomly trying the system's functionality. Caninclude negative testing as well. See also Monkey Testing
Agile Testing: Testing practice for projects usingagile methodologies, treating development as the customer of testing andemphasizing a test-first design paradigm. See also Test DrivenDevelopment
Application BinaryInterface (ABI): Aspecification defining requirements for portability of applications in binaryforms across defferent system platforms and environments. 
Application ProgrammingInterface (API): A formalizedset of software calls and routines that can be referenced by an applicationprogram in order to access supporting system or network services. 
Automated SoftwareQuality (ASQ): The use ofsoftware tools, such as automated testing tools, to improve softwarequality. 
Automated Testing:


Testing employing software tools which execute tests without manualintervention. Can be applied in GUI, performance, API, etc. testing.

·                The use of software tocontrol the execution of tests, the comparison of actual outcomes to predictedoutcomes, the setting up of test preconditions, and other test control and testreporting functions.

B
Backus-Naur Form: A metalanguage used to formallydescribe the syntax of a language. 
Basic Block: A sequence of one or moreconsecutive, executable statements containing no branches. 
Basis Path Testing: A white box test case designtechnique that uses the algorithmic flow of the program to design tests. 
Basis Set: The set of tests derived using basis path testing
Baseline: The point at which some deliverableproduced during the software engineering process is put under formal changecontrol. 
Beta Testing: Testing of a rerelease of a softwareproduct conducted by customers. 
Binary PortabilityTesting: Testing anexecutable application for portability across system platforms andenvironments, usually for conformation to an ABI specification. 
Black Box Testing: Testing based on an analysis of thespecification of a piece of software without reference to its internal workings.The goal is to test how well the component conforms to the publishedrequirements for the component. 
Bottom Up Testing: An approach to integration testingwhere the lowest level components are tested first, then used to facilitate thetesting of higher level components. The process is repeated until the componentat the top of the hierarchy is tested. 
Boundary Testing: Test which focus on the boundary orlimit conditions of the software being tested. (Some of these tests are stresstests). 
Bug: A fault in a program which causes theprogram to perform in an unintended or unanticipated manner. 
Boundary Value Analysis: BVA is similar to EquivalencePartitioning but focuses on "corner cases" or values that are usuallyout of range as defined by the specification. his means that if a functionexpects all values in range of negative 100 to positive 1000, test inputs wouldinclude negative 101 and positive 1001. 
Branch Testing: Testing in which all branches in theprogram source code are tested at least once. 
Breadth Testing: A test suite that exercises the fullfunctionality of a product but does not test features in detail.

 

C
CAST: Computer Aided SoftwareTesting. 
Capture/Replay Tool: A test tool that records test inputas it is sent to the software under test. The input cases stored can then beused to reproduce the test at a later time. Most commonly applied to GUI testtools. 
CMM: The Capability Maturity Model forSoftware (CMM or SW-CMM) is a model for judging the maturity of the softwareprocesses of an organization and for identifying the key practices that arerequired to increase the maturity of these processes. 
Cause Effect Graph: A graphical representation of inputsand the associated outputs effects which can be used to design testcases. 
Code Complete: Phase of development wherefunctionality is implemented in entirety; bug fixes are all that are left. Allfunctions found in the Functional Specifications have been implemented. 
Code Coverage: An analysis method that determineswhich parts of the software have been executed (covered) by the test case suiteand which parts have not been executed and therefore may require additionalattention. 
Code Inspection: A formal testing technique where theprogrammer reviews source code with a group who ask questions analyzing theprogram logic, analyzing the code with respect to a checklist of historicallycommon programming errors, and analyzing its compliance with codingstandards. 
Code Walkthrough: A formal testing technique wheresource code is traced by a group with a small set of test cases, while thestate of program variables is manually monitored, to analyze the programmer'slogic and assumptions. 
Coding: The generation of source code. 
Compatibility Testing: Testing whether software is compatiblewith other elements of a system with which it should operate, e.g. browsers,Operating Systems, or hardware. 
Component: A minimal software item for which aseparate specification is available. 
Component Testing: See Unit Testing
Concurrency Testing: Multi-user testing geared towardsdetermining the effects of accessing the same application code, module ordatabase records. Identifies and measures the level of locking, deadlocking anduse of single-threaded code and locking semaphores. 
Conformance Testing: The process of testing that animplementation conforms to the specification on which it is based. Usuallyapplied to testing conformance to a formal standard. 
Context Driven Testing: The context-driven school of softwaretesting is flavor of Agile Testing that advocates continuous and creativeevaluation of testing opportunities in light of the potential informationrevealed and the value of that information to the organization right now. 
Conversion Testing: Testing of programs or proceduresused to convert data from existing systems for use in replacementsystems. 
Cyclomatic Complexity: A measure of the logical complexityof an algorithm, used in white-box testing. 
D
Data Dictionary: A database that contains definitionsof all data items defined during analysis. 
Data Flow Diagram: A modeling notation that represents afunctional decomposition of a system. 
Data Driven Testing: Testing in which the action of a testcase is parameterized by externally defined data values, maintained as a fileor spreadsheet. A common technique in Automated Testing
Debugging: The process of finding and removingthe causes of software failures. 
Defect: Nonconformance to requirements orfunctional / program specification 
Dependency Testing: Examines an application'srequirements for pre-existing software, initial states and configuration inorder to maintain proper functionality. 
Depth Testing: A test that exercises a feature of aproduct in full detail. 
Dynamic Testing: Testing software through executingit. See also Static Testing
E
Emulator: A device, computer program, or systemthat accepts the same inputs and produces the same outputs as a givensystem. 
Endurance Testing: Checks for memory leaks or otherproblems that may occur with prolonged execution. 
End-to-End testing: Testing a complete applicationenvironment in a situation that mimics real-world use, such as interacting witha database, using network communications, or interacting with other hardware,applications, or systems if appropriate. 
Equivalence Class: A portion of a component's input oroutput domains for which the component's behaviour is assumed to be the samefrom the component's specification. 
Equivalence Partitioning: A test case design technique for acomponent in which test cases are designed to execute representatives fromequivalence classes. 
Exhaustive Testing: Testing which covers all combinationsof input values and preconditions for an element of the software under test.
F
Functional Decomposition: A technique used during planning,analysis and design; creates a functional hierarchy for the software.
Functional Specification: A document that describes in detailthe characteristics of the product with regard to its intended features.
Functional Testing: See also Black Box Testing.

Testing the features and operational behavior of a product to ensure theycorrespond to its specifications.

·                Testing that ignores theinternal mechanism of a system or component and focuses solely on the outputsgenerated in response to selected inputs and execution conditions.

G
Glass Box Testing: A synonym for White Box Testing.
Gorilla Testing: Testing one particularmodule,functionality heavily.
Gray Box Testing: A combination of Black Box and White Box testingmethodologies: testing a piece of software against its specification but usingsome knowledge of its internal workings.

 

H
High Order Tests: Black-box tests conducted once the software has beenintegrated. 


I
Independent Test Group (ITG): A group of people whose primary responsibility is softwaretesting, 
Inspection: A group review quality improvement process for writtenmaterial. It consists of two aspects; product (document itself) improvement andprocess improvement (of both document production and inspection). 
Integration Testing: Testing of combined parts of an application to determineif they function together correctly. Usually performed after unit andfunctional testing. This type of testing is especially relevant toclient/server and distributed systems. 
Installation Testing: Confirms that the application under test recovers fromexpected or unexpected events without loss of data or functionality. Events caninclude shortage of disk space, unexpected loss of communication, or power outconditions. 
J

K


L
Load Testing: See Performance Testing. 
Localization Testing: This term refers to making software specifically designedfor a specific locality. 
Loop Testing: A white box testing technique that exercises programloops. 
M

Metric: A standard of measurement. Software metrics are thestatistics describing the structure or content of a program. A metric should bea real objective measurement of something such as number of bugs per lines ofcode. 
Monkey Testing: Testing a system or an Application on the fly, i.e justfew tests here and there to ensure the system or an application does not crashout. 
N

Negative Testing: Testing aimed at showing software does not work. Alsoknown as "test to fail". See also Positive Testing. 
O

P

Path Testing: Testing in which all paths in the program source code aretested at least once. 
Performance Testing: Testing conducted to evaluate the compliance of a systemor component with specified performance requirements. Often this is performedusing an automated test tool to simulate large number of users. Also know as"Load Testing". 
Positive Testing: Testing aimed at showing software works. Also known as"test to pass". See also Negative Testing. 
Q

Quality Assurance: All those planned or systematic actions necessary toprovide adequate confidence that a product or service is of the type andquality needed and expected by the customer. 
Quality Audit: A systematic and independent examination to determinewhether quality activities and related results comply with planned arrangementsand whether these arrangements are implemented effectively and are suitable toachieve objectives. 
Quality Circle: A group of individuals with related interests that meet atregular intervals to consider problems or other matters related to the qualityof outputs of a process and to the correction of problems or to the improvementof quality. 
Quality Control: The operational techniques and the activities used tofulfill and verify requirements of quality. 
Quality Management: That aspect of the overall management function thatdetermines and implements the quality policy. 
Quality Policy: The overall intentions and direction of an organization asregards quality as formally expressed by top management. 
Quality System: The organizational structure, responsibilities,procedures, processes, and resources for implementing quality management. 
R
Race Condition: A cause of concurrency problems. Multiple accesses to ashared resource, at least one of which is a write, with no mechanism used byeither to moderate simultaneous access. 

Ramp Testing: Continuously raising an input signal until the systembreaks down. 

Recovery Testing: Confirms that the program recovers from expected orunexpected events without loss of data or functionality. Events can includeshortage of disk space, unexpected loss of communication, or power outconditions. 

Regression Testing: Retesting a previously tested program followingmodification to ensure that faults have not been introduced or uncovered as aresult of the changes made. 

Release Candidate: A pre-release version, which contains the desiredfunctionality of the final version, but which needs to be tested for bugs(which ideally should be removed before the final version is released).

 

S
<>SanityTesting: Brief test of major functional elements of a piece ofsoftware to determine if its basically operational. See alsoSmoke Testing. 
<>ScalabilityTesting: Performance testing focused on ensuring the applicationunder test gracefully handles increases in work load. 
<>SecurityTesting: Testing which confirms that the program can restrictaccess to authorized personnel and that the authorized personnel can access thefunctions available to their security level. 


<>Smoke Testing: A quick-and-dirty test that the major functions of a pieceof software work. Originated in the hardware testing practice of turning on anew piece of hardware for the first time and considering it a success if itdoes not catch on fire. 

<>Soak Testing: Running a system at high load for a prolonged period oftime. For example, running several times more transactions in an entire day (ornight) than would be expected in a busy day, to identify and performanceproblems that appear after a large number of transactions have been executed. 

<>SoftwareRequirements Specification: A deliverable that describes all data, functional andbehavioral requirements, all constraints, and all validation requirements forsoftware/ 

<>SoftwareTesting: A set of activities conducted with the intent of findingerrors in software. 
<>StaticAnalysis: Analysis of a program carried out without executing theprogram. 
Static Analyzer: A tool that carries out static analysis. 
<>StaticTesting: Analysis of a program carried out without executing theprogram. 

Storage Testing: Testing that verifies the program under test stores datafiles in the correct directories and that it reserves sufficient space toprevent unexpected termination resulting from lack of space. This is externalstorage as opposed to internal storage. 
Stress Testing: Testing conducted to evaluate a system or component at orbeyond the limits of its specified requirements to determine the load underwhich it fails and how. Often this is performance testing using a very high level of simulated load. 
Structural Testing: Testing based on an analysis of internal workings andstructure of a piece of software. See also White Box Testing. 
System Testing: Testing that attempts to discover defects that areproperties of the entire system rather than of its individual components.

 

Testability: The degree to which a system or component facilitates theestablishment of test criteria and the performance of tests to determinewhether those criteria have been met. 
Testing:

The process of exercising software to verify that it satisfies specifiedrequirements and to detect errors. The process of analyzing a software item todetect the differences between existing and required conditions (that is,bugs), and to evaluate the features of the software item (Ref. IEEE Std 829).

The process of operating a system or component under specified conditions,observing or recording the results, and making an evaluation of some aspect ofthe system or component.

Test Automation: See Automated Testing
Test Bed: An execution environment configuredfor testing. May consist of specific hardware, OS, network topology,configuration of the product under test, other application or system software,etc. The Test Plan for a project should enumerated the test beds(s) to beused. 
Test Case:

Test Case is a commonly used term for a specific test. This is usually thesmallest unit of testing. A Test Case will consist of information such asrequirements testing, test steps, verification steps, prerequisites, outputs,test environment, etc.

·                A set of inputs,execution preconditions, and expected outcomes developed for a particularobjective, such as to exercise a particular program path or to verifycompliance with a specific requirement.

Test DrivenDevelopment: Testingmethodology associated with Agile Programming in which every chunk of code iscovered by unit tests, which must all pass all the time, in an effort toeliminate unit-level and regression bugs during development. Practitioners ofTDD write a lot of tests, i.e. an equal number of lines of test code to thesize of the production code. 
Test Driver: A program or test tool used toexecute a tests. Also known as a Test Harness. 
Test Environment: The hardware and software environmentin which tests will be run, and any other software with which the softwareunder test interacts when under test including stubs and test drivers. 
Test First Design: Test-first design is one of themandatory practices of Extreme Programming (XP).It requires that programmers donot write any production code until they have first written a unit test. 
Test Harness: A program or test tool used toexecute a tests. Also known as a Test Driver. 
Test Plan: A document describing the scope,approach, resources, and schedule of intended testing activities. It identifiestest items, the features to be tested, the testing tasks, who will do eachtask, and any risks requiring contingency planning. Ref IEEE Std 829. 
Test Procedure: A document providing detailedinstructions for the execution of one or more test cases
Test Script: Commonly used to refer to theinstructions for a particular test that will be carried out by an automatedtest tool. 
Test Specification: A document specifying the testapproach for a software feature or combination or features and the inputs,predicted results and execution conditions for the associated tests. 
Test Suite: A collection of tests used tovalidate the behavior of a product. The scope of a Test Suite varies fromorganization to organization. There may be several Test Suites for a particularproduct for example. In most cases however a Test Suite is a high levelconcept, grouping together hundreds or thousands of tests related by what theyare intended to test. 
Test Tools: Computer programs used in the testingof a system, a component of the system, or its documentation. 
Thread Testing: A variation of top-down testing where the progressive integration ofcomponents follows the implementation of subsets of the requirements, asopposed to the integration of components by successively lower levels. 
Top Down Testing: An approach to integration testingwhere the component at the top of the component hierarchy is tested first, withlower level components being simulated by stubs. Tested components are thenused to test lower level components. The process is repeated until the lowestlevel components have been tested. 
Total Quality Management: A company commitment to develop aprocess that achieves high quality product and customer satisfaction. 
Traceability Matrix: A document showing the relationshipbetween Test Requirements and Test Cases. 
U
Usability Testing: Testing the ease with which users canlearn and use a product. 
Use Case: The specification of tests that areconducted from the end-user perspective. Use cases tend to focus on operatingsoftware as an end-user would conduct their day-to-day activities. 
Unit Testing: Testing of individual softwarecomponents. 
V
Validation: The process of evaluating software atthe end of the software development process to ensure compliance with softwarerequirements. The techniques for validation is testing, inspection andreviewing. 
Verification: The process of determining whether ofnot the products of a given phase of the software development cycle meet theimplementation steps and can be traced to the incoming objectives establishedduring the previous phase. The techniques for verification are testing,inspection and reviewing. 
Volume Testing: Testing which confirms that anyvalues that may become large over time (such as accumulated counts, logs, anddata files), can be accommodated by the program and will not cause the programto stop working or degrade its operation in any manner. 

W


Walkthrough: A review of requirements, designs orcode characterized by the author of the material under review guiding theprogression of the review. 
White Box Testing: Testing based on an analysis ofinternal workings and structure of a piece of software. Includes techniquessuch as Branch Testing and Path Testing.Also known as Structural Testing and Glass Box Testing.Contrast with Black Box Testing
Workflow Testing: Scripted end-to-end testing whichduplicates specific workflows which are expected to be utilized by theend-user.

录制端到端的测试,重复终端用户希望使用的指定的流程。

原创粉丝点击