CS605 - Software Engineering II - Lecture Handout 13


Software Quality Factors

In 1978, McCall identified a set of factors that can be used to develop metrics for software quality. These factors attempt to assess the inner quality of software through attributes that can be observed from the outside. The basic idea is that the quality of the software can be inferred by measuring certain attributes once the product is put to actual use. Once completed and deployed, a software product goes through three phases: operation (when it is used), revision (when it goes through changes), and transition (when it is ported to different environments and platforms).

During each of these phases, different types of data can be collected to measure the quality of the product. McCall’s model is depicted and explained as follows.

[Figure: McCall’s software quality factors]

Factors related to operation

  • Correctness
    • The extent to which a program satisfies its specifications and fulfills the customer’s mission objectives
  • Reliability
    • The extent to which a program can be expected to perform its intended function with required precision
  • Efficiency
    • The amount of computing resources required by a program to perform its function
  • Integrity
    • The extent to which access to software or data by unauthorized persons can be controlled
  • Usability
    • The effort required to learn, operate, prepare input for, and interpret output of a program

Factors related to revision

  • Maintainability
    • Effort required to locate and fix an error in a program
  • Flexibility
    • Effort required to modify an operational program
  • Testability
    • Effort required to test a program to ensure that it performs its intended function

Factors related to transition

  • Portability
    • Effort required to transfer the program from one hardware and/or software system environment to another
  • Reusability
    • Extent to which a program can be reused in other applications
  • Interoperability
    • Effort required to couple one system to another

It is interesting to note that the field of computing and its theoretical foundations have gone through phenomenal changes, but McCall’s quality factors are still as relevant as they were almost 25 years ago.

Measuring Quality

Gilb extends McCall’s idea and proposes that quality can be measured by measuring the correctness, maintainability, integrity, and usability of the product.

Correctness is defined as the degree to which software performs its function. It can be measured in defects/KLOC or defects/FP, where a defect is defined as a verified lack of conformance to requirements. These are problems reported by the user after release and are counted over a standard period of time, typically the first year of operation.
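
As a minimal sketch, the two defect rates can be computed directly from first-year defect reports. All of the figures below are invented for illustration; none come from the lecture:

    # Hypothetical first-year data for one release (illustrative only)
    defects_reported = 30      # verified non-conformances reported by users
    size_kloc = 25.0           # delivered size in thousands of lines of code
    size_fp = 180.0            # delivered size in function points

    print(f"{defects_reported / size_kloc:.2f} defects/KLOC")  # 1.20
    print(f"{defects_reported / size_fp:.2f} defects/FP")      # 0.17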

Maintainability is defined as the ease with which a program can be corrected if an error is encountered, adapted if its environment changes, or enhanced if the customer requires an enhancement in functionality. It is an indirect measure of quality.

A simple time-oriented metric to gauge maintainability is MTTC (mean time to change). It is defined as the time it takes to analyze the change request, design an appropriate modification, implement the change, test it, and distribute it to all users.
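
Computationally, MTTC is just the mean of the full change cycle times. A minimal sketch, assuming each change request’s cycle (analysis through distribution) has been timed in days; the durations are hypothetical:

    # Days from receiving a change request to distributing the tested fix
    # (one entry per completed change; numbers are invented for the example)
    change_cycle_days = [4.5, 7.0, 3.0, 10.5, 6.0]

    mttc = sum(change_cycle_days) / len(change_cycle_days)
    print(f"MTTC = {mttc:.1f} days")  # 6.2 days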

A cost-oriented metric used to assess maintainability is called spoilage. It is defined as the cost to correct defects encountered after the software has been released to the users. Spoilage cost is plotted against overall project cost, as a function of time, to determine whether the overall maintainability of software produced by the organization is improving.
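
A sketch of how that trend might be tracked, with invented yearly figures; a falling percentage would suggest that maintainability is improving:

    # (cost of fixing post-release defects, total project cost) per year
    # All figures are hypothetical, for illustration only
    yearly = {2000: (120_000, 1_000_000),
              2001: (90_000, 1_100_000),
              2002: (60_000, 1_050_000)}

    for year, (spoilage_cost, project_cost) in sorted(yearly.items()):
        print(f"{year}: spoilage = {spoilage_cost / project_cost:.1%} of project cost")
    # Prints 12.0%, 8.2%, 5.7% - a declining spoilage ratio over time
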
Integrity is an extremely important measure, especially in today’s context when systems are exposed to all sorts of attacks because of the Internet phenomenon. It is defined as software’s ability to withstand attacks (both accidental and intentional) on its security. It includes the integrity of programs, data, and documents. To measure integrity, two additional attributes are needed: threat and security.

Threat is the probability (derived or measured from empirical evidence) that an attack of a specific type will occur within a given time, and security is the probability that an attack of a specific type will be repelled. The integrity of a system is then defined, for each type of attack, as one minus the probability that the attack takes place and is not repelled, summed over all attack types.

                                    Integrity = ∑ [1 - threat x (1 - security)]
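
For a single attack type with a hypothetical threat of 0.25 and security of 0.95, the term works out to 1 - 0.25 x (1 - 0.95) = 0.9875, i.e. roughly 0.99:

    # Hypothetical estimates for one attack type (illustrative only)
    threat, security = 0.25, 0.95

    # One term of the summation: 1 - threat * (1 - security).
    # The full integrity measure sums this term over all attack types.
    term = 1 - threat * (1 - security)
    print(f"integrity term = {term:.4f}")  # 0.9875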

Finally, usability is a measure of user-friendliness, the ease with which a system can be used. It can be measured in terms of four characteristics:
  1. Physical or intellectual skill required to learn the system
  2. The time required to become moderately efficient in the use of system
  3. The net increase in productivity
  4. A subjective assessment

It is important to note that, except for usability, the other three factors are essentially the same as those proposed by McCall.

Defect Removal Efficiency

Defect removal efficiency is a measure of how many defects are removed by the quality assurance processes before the product is shipped for operation. It is therefore a measure of the effectiveness of the QA processes during development. It is useful at both the project and process levels.
Defect removal efficiency is calculated as the number of defects removed before shipment, as a percentage of total defects:

                                    DRE = E/(E+D)

Where

    • E – errors found before delivery
    • D – errors found after delivery (typically within the first year of operation)
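
As a quick worked example with invented counts: if 90 errors are found before delivery and 10 are reported in the first year after, DRE = 90/(90 + 10) = 0.90, i.e. 90% of known defects were removed before shipment:

    def dre(errors_before, errors_after):
        # Fraction of all known defects that were removed before shipment
        return errors_before / (errors_before + errors_after)

    print(f"DRE = {dre(90, 10):.0%}")  # 90%
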
Regarding the effectiveness of various QA activities, Capers Jones published some data in 1997, which is summarized in the following table.

[Table: Defect removal efficiency of design inspections, code inspections, the QA function, and testing (Capers Jones, 1997)]

In this research, Jones measured the effectiveness of four different activities, namely design inspection, code inspection, the quality assurance function, and testing. It is important to note that testing alone yields a DRE of only 40% on average. However, when it is combined with design and code inspections, the DRE reaches 97%. This means that code and design inspections are extremely important activities that are unfortunately not given their due importance.