Continuous Integration (CI) is a modern practice for improving software quality. In this software development practice, every team member integrates (merges changes into the repository) their work frequently; usually each person integrates at least once per completed task. Each integration is verified by an automated build (a snapshot build) and automated tests, to detect integration errors and software defects as quickly as possible. This approach leads to fewer integration problems and allows a team to develop more stable software more quickly.
To explain this in more detail: a developer starts work by taking a copy of the current integrated source onto a local development machine, and then works on that copy, doing whatever is needed to complete the task. This usually means both altering the production code and adding or changing automated tests. By 'automated tests' I mean mostly unit tests, which can be written with testing frameworks such as xUnit, JUnit or TestNG.
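As a minimal sketch of such an automated unit test (JUnit 4 is assumed here, and the PriceCalculator class is a hypothetical example, not taken from any particular project):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical example of a unit test that runs as part of every integration build.
public class PriceCalculatorTest {

    // A trivial production class, inlined here only to keep the example self-contained.
    static class PriceCalculator {
        double totalWithTax(double net, double taxRate) {
            return net + net * taxRate;
        }
    }

    @Test
    public void addsTaxToNetPrice() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(110.0, calculator.totalWithTax(100.0, 0.10), 0.0001);
    }
}

A test like this is cheap to run, so the whole suite can be executed on every integration.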
Once the developer is done with the task, he builds a snapshot with the recent changes. If the snapshot builds without errors, he then runs the automated tests. Only if everything builds and all tests pass is the overall build considered good. Once the developer has made a good build from a properly synchronized working copy, he can finally commit his changes to the repository.
The whole point of Continuous Integration is to provide rapid feedback. A simple example of this is a two stage build. The first stage would do the compilation and run tests that are more localized unit tests with the database completely stubbed out. Such tests can run very fast, keeping within the ten minute guideline. However any bugs that involve larger scale interactions, particularly those involving the real database, won't be found. The second stage build runs a different suite of tests that do hit the real database and involve more end-to-end behavior.
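To illustrate what 'the database completely stubbed out' can look like in a first-stage test, here is a small hypothetical sketch (the OrderRepository, InMemoryOrderRepository and OrderService names are invented for this example; JUnit 4 is assumed). The production code depends on a repository interface, and the fast commit-stage tests supply an in-memory stub instead of a real database:

import org.junit.Test;
import static org.junit.Assert.assertTrue;

import java.util.HashMap;
import java.util.Map;

// Hypothetical first-stage (commit) test with the database completely stubbed out.
public class OrderServiceTest {

    // The production code depends on this abstraction, not on a concrete database.
    interface OrderRepository {
        int countOrdersFor(String customerId);
    }

    // In-memory stub: fast and deterministic, no database connection required.
    static class InMemoryOrderRepository implements OrderRepository {
        private final Map<String, Integer> counts = new HashMap<String, Integer>();

        void setOrderCount(String customerId, int count) {
            counts.put(customerId, count);
        }

        @Override
        public int countOrdersFor(String customerId) {
            Integer count = counts.get(customerId);
            return count == null ? 0 : count;
        }
    }

    // A small piece of production logic that uses the repository.
    static class OrderService {
        private final OrderRepository repository;

        OrderService(OrderRepository repository) {
            this.repository = repository;
        }

        boolean isFrequentBuyer(String customerId) {
            return repository.countOrdersFor(customerId) >= 10;
        }
    }

    @Test
    public void tenOrMoreOrdersMakesAFrequentBuyer() {
        InMemoryOrderRepository stub = new InMemoryOrderRepository();
        stub.setOrderCount("customer-42", 10);

        OrderService service = new OrderService(stub);

        assertTrue(service.isFrequentBuyer("customer-42"));
    }
}

The second-stage suite would then exercise a real repository implementation against an actual database, catching the larger scale interactions mentioned above.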
Also, if you test (mainly in the second stage) in a different environment, every difference introduces a risk that what happens under test won't happen in production. So testers should test in a clone of the production environment.
On the whole, I think the greatest and most wide-ranging benefit of Continuous Integration (CI) is reduced risk. As a result, projects that practice Continuous Integration tend to have fewer bugs, both in production and in process.
Wednesday, August 19, 2009
Wednesday, June 24, 2009
What is QUnit
QUnit is the unit test runner for the jQuery project. It's especially useful for regression testing: having good unit test coverage makes refactoring safe, because you can run the tests after each small refactoring step and always know which change broke something. QUnit is particularly valuable here because many testing tools are not compatible with Ajax and jQuery.
Friday, June 19, 2009
Test Maturity Model Integration
Test Maturity Model Integration (TMMi).
The software industry has focused heavily on improving its development processes. A guideline that has been widely used to improve development processes is the Capability Maturity Model. The Capability Maturity Model (CMM) and its successor, the Capability Maturity Model Integration (CMMI), are often regarded as the industry standard for software process improvement. The testing community has since created complementary improvement models of its own. This document describes the Test Maturity Model Integration (TMMi). The TMMi is a detailed model for test process improvement and is positioned as complementary to the CMMI. There are five maturity levels in the TMMi.
Level 1 Initial
At TMMi level 1, testing is a chaotic, undefined process and is often considered a part of debugging. The
organization usually does not provide a stable environment to support the processes. Success in these
organizations depends on the competence and heroics of the people in the organization and not the use of proven
processes. Tests are developed in an ad-hoc way after coding is completed. Testing and debugging are
interleaved to get the bugs out of the system. The objective of testing at this level is to show that the software runs
without major failures. Products are released without adequate visibility regarding quality and risks. In the field, the product often does not fulfill users' needs, is not stable, or is too slow to work with. Within testing there is a lack of resources, tools and well-educated staff. At TMMi level 1 there are no defined process areas. Maturity level 1 organizations are characterized by a tendency to over-commit, abandonment of processes in times of crisis, and an inability to repeat their successes. Products also tend not to be released on time, budgets are overrun, and quality does not meet expectations.
Level 2 Managed
At TMMi level 2, testing becomes a managed process and is clearly separated from debugging. The process
discipline reflected by maturity level 2 helps to ensure that existing practices are retained during times of stress.
However, testing is still perceived by many stakeholders as a project phase that follows coding. In the
context of improving the test process, a company-wide or program-wide test strategy is established. Test plans are
also being developed. Within the test plan a test approach is defined, whereby the approach is based on the result
of a product risk assessment. Risk management techniques are used to identify the product risks based on
documented requirements. The test plan defines what testing is required, when, how and by whom. Commitments
are established with stakeholders and revised as needed. Testing is monitored and controlled to ensure it is going
according to plan and actions can be taken if deviations occur. The status of the work products and the delivery of
testing services are visible to management. Test design techniques, such as boundary value analysis, are applied to derive and select test cases from specifications (a small sketch follows the list of level 2 process areas below). However, testing may still start relatively late in the development lifecycle, e.g. during the
design or even during the coding phase. Testing is multi-leveled: there are unit, integration, system and acceptance
test levels. For each identified test level there are specific testing objectives defined in the organization-wide or
program-wide test strategy. The main objective of testing in a TMMi level 2 organization is to verify that the
product satisfies the specified requirements. The purpose is also to clearly differentiate the processes of testing
and debugging. Many quality problems at this TMMi level occur because testing occurs late in the development
lifecycle. Defects are propagated from the requirements and design into code. There are no formal review
programs as yet to address this important issue. Post-code, execution-based testing is still considered by many stakeholders to be the primary testing activity.
The process areas at TMMi level 2 are:
2.1 Test Policy and Strategy
2.2 Test Planning
2.3 Test Monitoring and Control
2.4 Test Design and Execution
2.5 Test Environment
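As a small, hypothetical sketch of one such specification-based test design technique, boundary value analysis, consider a requirement of the form 'an applicant must be between 18 and 65 years old (inclusive) to qualify'. The class and method names below are invented for illustration, and JUnit 4 is assumed:

import org.junit.Test;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

// Hypothetical illustration of boundary value analysis as a test design technique.
public class EligibilityBoundaryTest {

    // Minimal production logic, inlined here only to keep the sketch self-contained.
    static boolean isEligible(int age) {
        return age >= 18 && age <= 65;
    }

    @Test
    public void valuesJustBelowAndOnTheLowerBoundary() {
        assertFalse(isEligible(17)); // just below the lower boundary
        assertTrue(isEligible(18));  // on the lower boundary
    }

    @Test
    public void valuesOnAndJustAboveTheUpperBoundary() {
        assertTrue(isEligible(65));  // on the upper boundary
        assertFalse(isEligible(66)); // just above the upper boundary
    }
}

Each test case is derived directly from the specification's boundaries rather than chosen ad hoc.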
Level 3 Defined
At TMMi level 3, testing is no longer a phase that follows coding. It is fully integrated into the development lifecycle
and the associated milestones. Test planning is done at an early project stage, e.g. during the requirements phase,
by means of a master test plan. The development of a master test plan builds on the test planning skills and
commitments acquired at TMMi level 2. The organization’s set of standard test processes, which is the basis for
maturity level 3, is established and improved over time. A test organization and a specific test training program
exist, and testing is perceived as being a profession. Test process improvement is fully institutionalized as part of
the test organization. Organizations at this level understand the importance of reviews in quality control; a formal
review program is implemented although not yet fully linked to the dynamic testing process. Reviews take place
across the lifecycle. Test professionals are involved in reviews of requirements specifications. Whereas the test designs at TMMi level 2 focus mainly on functionality testing, at level 3 test designs and test techniques are expanded, depending on the business objectives, to also include non-functional testing, e.g. usability and/or reliability.
A critical distinction between TMMi maturity level 2 and 3 is the scope of the standards, process descriptions, and
procedures. At maturity level 2 these may be quite different in each specific instance, e.g. on a particular project. At
maturity level 3 these are tailored from the organization’s set of standard processes to suit a particular project or
organizational unit and therefore are more consistent except for the differences allowed by the tailoring guidelines.
Another critical distinction is that at maturity level 3, processes are typically described more rigorously than at
maturity level 2. As a consequence at maturity level 3, the organization must revisit the maturity level 2 process
areas.
The process areas at TMMi level 3 are:
3.1 Test Organization
3.2 Test Training Program
3.3 Test Lifecycle and Integration
3.4 Non-Functional Testing
3.5 Peer Reviews
Level 4 Management and Measurement
In TMMi 4 organizations testing is a thoroughly defined, well-founded and measurable process. At maturity level 4,
the organization and projects establish quantitative objectives for product quality and process performance and use
them as criteria in managing them. Product quality and process performance is understood in statistical terms and
is managed throughout the lifecycle. Measures are incorporated into the organization’s measurement repository to
support fact-based decision making. Reviews and inspections are considered to be part of testing and are used to measure document quality. The static and dynamic test approaches are integrated into one. Reviews are formally used as a means to control quality gates. Products are evaluated using quantitative criteria for quality attributes such as reliability, usability and maintainability. An organization-wide test measurement program provides information
and visibility regarding the test process. Testing is perceived as evaluation; it consists of all lifecycle activities
concerned with checking products and related work products.
The process areas at TMMi level 4 are:
4.1 Test Measurement
4.2 Product Quality Evaluation
4.3 Advanced Peer Reviews
Level 5 Optimization
On the basis of all results that have been achieved by fulfilling all the improvement goals of the previous maturity
levels, testing is now a completely defined process and one is capable of controlling the costs and the testing
effectiveness. At TMMi maturity level 5, an organization continually improves its processes based on a quantitative understanding of the common causes of variation inherent in processes. Improving test process performance is
carried out through incremental and innovative process and technological improvements. The methods and
techniques are optimized and there is a continuous focus on fine-tuning and test process improvement. Defect
prevention and quality control are practiced. Statistical sampling, measurements of confidence levels,
trustworthiness, and reliability drive the test process. Amongst others “Defect Prevention” and “Quality Control” are
introduced as process areas. The test process is characterized by sampling based quality measurements. A
detailed procedure exists for selecting and evaluating test tools. Tools support the test process as much as
possible during test design, test execution, regression testing, test case management, etc. Process re-use is also
practiced at level 5 supported by a process asset library. Testing is a process with the objective to prevent defects.
Process areas at level 5 are:
5.1 Defect Prevention
5.2 Test Process Optimization
5.3 Quality Control
Note that the TMMi does not have a specific process area dedicated to test tools and/or test automation. Within the TMMi, test tools are treated as a supporting resource (practices) and are therefore part of the process area where they provide support.
Thursday, June 18, 2009
TDD Methodology for Component-Based System
For modern systems there is growing evidence that serial/traditional approaches, such as the traditional waterfall model and model-driven architecture, are ineffective and that development lifecycles need to be iterative and incremental. The TDD development cycle starts with the requirement specification and therefore catches defects much earlier in the development cycle. TDD requires that no production code be written until a unit test is written first. We compare TDD with the traditional methods and describe the TDD method in detail. We cover continuous integration, acceptance testing, system-wide testing for each iteration, test frameworks, cost of change, ROI, and the benefits and limitations of test driven design, and we provide evidence from industry that TDD leads to higher programmer productivity with higher code quality. Future work will extend the reach and effectiveness of TDD by using the latest technologies to generate tests from message sequence charts and to generate code through the use of a model compiler, leading to an advanced test driven design methodology.
Test driven development (TDD)
TDD is a software development technique that ensures your source code is thoroughly unit-tested, in contrast to traditional testing methodologies, where unit testing is recommended but not enforced. It combines test-first development and refactoring (where, if the existing design isn't the best possible one for implementing a particular piece of functionality, you improve it to enable the new feature).
TDD is gaining popularity because it allows for incremental software development, where bugs are detected and fixed as soon as the code is written, rather than at the end of an iteration or a milestone.
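As a minimal sketch of this test-first rhythm (the StringCalculator class below is hypothetical and JUnit 4 is assumed): first write a failing unit test that captures the requirement, then write just enough production code to make it pass, then refactor while keeping the tests green.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1 (red): these tests are written before the production code exists,
// so at first they do not even compile.
public class StringCalculatorTest {

    @Test
    public void emptyStringEvaluatesToZero() {
        assertEquals(0, new StringCalculator().add(""));
    }

    @Test
    public void sumsTwoCommaSeparatedNumbers() {
        assertEquals(7, new StringCalculator().add("3,4"));
    }
}

// Step 2 (green): the simplest implementation that makes both tests pass.
// Step 3 (refactor): improve the design while the tests stay green.
class StringCalculator {
    int add(String numbers) {
        if (numbers.isEmpty()) {
            return 0;
        }
        int sum = 0;
        for (String part : numbers.split(",")) {
            sum += Integer.parseInt(part.trim());
        }
        return sum;
    }
}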