Attribute MSA Study

“Without data, you are just another person with an opinion.” ― W. Edwards Deming

Introduction

Why does the taste of tea differ from day to day even though the ingredients and the person preparing it are the same? How can students taught by the same teacher end up with different results? How often does the performance of a particular stock change from day to day, even though there is no change in the company’s strategy?

Read More: https://bit.ly/ISO9001Series

Content: Attribute MSA Study

  1. What is an Attribute?
  2. What is a Variable?
  3. What are Type I and Type II errors?
  4. How to conduct an Attribute MSA study?
  5. Conclusion

Objective

The MSA study for attribute data is a straightforward method that can be used to assess the accuracy of appraisers and the types of mistakes they are likely to make. Typically, samples of parts are appraised by operators as good or bad. These classifications are then compared with a reference or standard.

Once you read this blog, you will understand what an attribute is, what a variable is, the key difference between Type I and Type II errors, how to conduct an attribute MSA study, and the benefits to the organization.

Read More: https://bit.ly/DifferenceOldNewFMEA

Definition: MSA (Measurement System Analysis) 4th Edition (2010)

Bias: The difference between the observed average of measurements and the reference value.

Linearity: The change in bias over the normal operating range.

Stability (Drift): The total variation in the measurements obtained with the measurement system on the same master or parts when measuring a single characteristic over an extended time period. In other words, stability is the change in bias over time.

Repeatability: Variations in the measurements obtained with one measuring instrument when used several times by an appraiser while measuring the identical characteristic on the same part.

Reproducibility: Variation in the average of the measurements made by different appraisers using the same gauge when measuring a characteristic on a part.
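To make these definitions concrete, here is a minimal sketch for a variable gauge. The reference value, appraiser names and readings are hypothetical, and a formal study would follow the MSA manual’s Gage R&R method rather than this simplified calculation.

```python
# Minimal sketch (hypothetical numbers) of bias, repeatability and reproducibility.
from statistics import mean, stdev

reference_value = 10.00  # certified value of the master part (assumed)

# Repeated readings of the same master by two appraisers (hypothetical data)
readings = {
    "Appraiser A": [10.02, 10.01, 10.03, 10.02, 10.02],
    "Appraiser B": [9.98, 9.99, 9.97, 9.98, 9.99],
}

for name, values in readings.items():
    bias = mean(values) - reference_value   # observed average minus reference value
    repeatability = stdev(values)           # spread of one appraiser's repeated readings
    print(f"{name}: bias = {bias:+.3f}, repeatability (std dev) = {repeatability:.3f}")

# Reproducibility: variation between the appraisers' averages on the same part
appraiser_means = [mean(v) for v in readings.values()]
print(f"Reproducibility (spread of appraiser averages) = "
      f"{max(appraiser_means) - min(appraiser_means):.3f}")
```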

Read More: http://bit.ly/CommonSpecialCause

Detailed Information

Measurement Systems Analysis (MSA) helps the organization to assess whether the measurement system (vernier, micrometer, go-no-go gauge) being used in a given application meets the organization’s and customer’s criteria for accuracy and reliability.

Variable data: Quantitative data acquired through measurement, i.e. data describing physical characteristics such as length, width, temperature, time, strength, thickness, pressure, and so on. “Variable” means the measured values can vary anywhere along a given scale.

Attribute data: Qualitative data in which a quality characteristic or attribute is described by a classification rather than a numerical measurement. Attribute data is recorded and analysed as counts or as yes/no decisions; it is purely binary, such as Good or Bad, Hot or Cold, Right or Wrong, Black or White, fitment good or fitment not good, identification tag available or not, etc.
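A tiny illustration of the two data types; the field names and values below are invented for illustration only.

```python
# Variable data: measured on a continuous scale; attribute data: pass/fail judgements.
variable_data = {"shaft_diameter_mm": 25.03, "cure_time_s": 142.7}
attribute_data = {"fitment": "Good", "id_tag_present": "No"}
print("Variable:", variable_data)
print("Attribute:", attribute_data)
```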

Read More: http://bit.ly/VariableAttributeStudy

Attribute MSA is one of the tools within MSA, used to evaluate the measurement system (fitment gauge, visual inspection) when attribute (qualitative, i.e. not quantified) measurements are involved. With this tool the organization can confirm that measurement error is at an acceptable level before the system is used to inspect components in mass production.

Attribute MSA quantifies three Types of Variation:

  • variation within an individual appraiser’s repeated measurements,
  • variation between different appraisers’ measurements, and
  • variation between the appraisers’ measurements and a reference or standard.
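A minimal sketch of these three checks, using hypothetical OK/NOK decisions for two appraisers and two trials; a formal study would use the sample sizes and acceptance criteria from the MSA manual.

```python
# Hypothetical attribute data: two appraisers, two trials each, five parts.
reference = ["OK", "NOK", "OK", "OK", "NOK"]   # known standard for each part

results = {
    "Appraiser A": {"trial1": ["OK", "NOK", "OK", "OK", "NOK"],
                    "trial2": ["OK", "NOK", "OK", "NOK", "NOK"]},
    "Appraiser B": {"trial1": ["OK", "NOK", "OK", "OK", "OK"],
                    "trial2": ["OK", "NOK", "OK", "OK", "OK"]},
}
n_parts = len(reference)

for name, trials in results.items():
    # 1. Within appraiser: does the appraiser agree with themselves across trials?
    self_agree = sum(t1 == t2 for t1, t2 in zip(trials["trial1"], trials["trial2"]))
    # 3. Versus the standard: do both trials match the reference decision?
    std_agree = sum(t1 == t2 == ref for t1, t2, ref
                    in zip(trials["trial1"], trials["trial2"], reference))
    print(f"{name}: agrees with self on {self_agree}/{n_parts} parts, "
          f"with the standard on {std_agree}/{n_parts} parts")

# 2. Between appraisers: do all appraisers give the same answer on every trial?
between = sum(
    len({tr[t][i] for tr in results.values() for t in ("trial1", "trial2")}) == 1
    for i in range(n_parts)
)
print(f"All appraisers agree with each other on {between}/{n_parts} parts")
```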

Read More: http://bit.ly/TypeITypeIIError

Key Difference between Type I and Type II Error:

S.No. | Type I                                  | Type II
1     | The good part is sometimes called ‘bad’ | The bad part is sometimes called ‘good’
2     | False Alarm                             | Miss Rate
3     | Producer Risk                           | Consumer Risk
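A short sketch of how the two error rates can be counted for one appraiser against the reference; the part results below are hypothetical.

```python
# Sketch of the Type I / Type II split for one appraiser (hypothetical data).
reference = ["Good", "Good", "Bad", "Good", "Bad", "Bad"]
appraiser = ["Good", "Bad",  "Bad", "Good", "Good", "Bad"]

good_parts = [i for i, r in enumerate(reference) if r == "Good"]
bad_parts = [i for i, r in enumerate(reference) if r == "Bad"]

# Type I (false alarm / producer risk): good parts rejected by the appraiser
type1 = sum(appraiser[i] == "Bad" for i in good_parts) / len(good_parts)
# Type II (miss rate / consumer risk): bad parts accepted by the appraiser
type2 = sum(appraiser[i] == "Good" for i in bad_parts) / len(bad_parts)

print(f"Type I (false alarm) rate: {type1:.0%}")   # 1 of 3 good parts rejected
print(f"Type II (miss) rate:       {type2:.0%}")   # 1 of 3 bad parts accepted
```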

Read More: http://bit.ly/VariableAttributeControlChart

5 Steps to Conduct an Attribute MSA Study

  • Plan study
  • Conduct
  • Analyze and interpret
  • Improve the Measurement System if necessary
  • Ongoing evaluation

  • Plan Study

    • Define the sample size.
    • As per the AIAG MSA manual, the sample size should be 50 parts, or samples can be taken as per customer requirement.
    • Selection of samples from lot/production.
    • Selected samples should cover the full range of variation in the process.
    • Samples should contain at least 50/50 or 30/70 Good and Bad part combinations for better results.
    • A single part should have only one defect to avoid misunderstanding.
    • Operators should be trained for inspection.
    • The defect should be clearly understood.
    • The environment should be considered while studying. Example: Sufficient Lux level at measurement area.
    • All the study procedures should be well written and documented; this will result in an effective inspection and study.
    • Operators should not know which part they are measuring.
  • Conduct Study
    • Decide the time and place to conduct the study.
    • Define how to check parts in a blind study (a randomization sketch follows this list).
    • Define how much time is required to inspect each part.
    • Note any process deviation observed during the study.
    • Note any environmental changes observed while conducting the study.
    • Avoid errors in recording the inspection result.
    • Avoid errors in the measurement method.
    • Don’t disturb the operators.
    • The results should not be visible to the operator.
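As referenced in the list above, a simple way to keep the study blind is to present the parts in a different random order on every trial. The sketch below is an assumed helper for the study coordinator, not a prescribed MSA-manual procedure.

```python
# Blind-study presentation plan: each appraiser sees the 50 parts in a different
# random order on every trial, so part identity is not predictable.
import random

part_ids = list(range(1, 51))          # 50 numbered parts, as planned above
appraisers = ["Appraiser A", "Appraiser B", "Appraiser C"]
trials = 3

random.seed(7)                         # fixed seed only so the plan is reproducible
presentation_plan = {
    appraiser: [random.sample(part_ids, k=len(part_ids)) for _ in range(trials)]
    for appraiser in appraisers
}

# The study coordinator keeps this plan; operators only see the physical parts.
first_five = presentation_plan["Appraiser A"][0][:5]
print("Appraiser A, trial 1, first five parts presented:", first_five)
```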

Practical Example: Attribute study

An organization manufactures ball bearings, and inspection of the bearings is done with a go/no-go gauge. The objective is to identify the variation in this measurement system.

Pick the samples as specified above; the set should contain samples that are good, not good, and on the borderline (around 30% good and bad).

Identify the reference result for each sample exactly (OK or NOK).

Numbering should be provided on each bearing to identify the 50 samples.

2 or 3 appraisers are to check each bearing.

The time allowed for making a decision on each bearing should be the same as in the real situation (example: 150 seconds in total, i.e. 3 seconds per bearing for 50 bearings).

Each inspector is to perform 3 trials: 1st Trial, 2nd Trial, 3rd Trial.

The appraiser is to write the results on the worksheet (as OK or NOK).

Be careful with the order, and the OK and NOK labels have to be the same as in the worksheet.
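A sketch of how the recording worksheet for this bearing example could be generated (50 parts, 3 appraisers, 3 trials); the file name and column layout are illustrative, not a prescribed MSA-manual format.

```python
# Blank recording worksheet for the bearing example: decisions recorded as OK / NOK.
import csv

appraisers = ["A", "B", "C"]
trials = [1, 2, 3]

with open("attribute_msa_worksheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Part", "Reference"]
                    + [f"Appraiser {a} Trial {t}" for a in appraisers for t in trials])
    for part in range(1, 51):
        # Reference (OK/NOK) is filled in by the study owner; appraiser cells
        # stay blank until each trial is completed.
        writer.writerow([part, ""] + [""] * (len(appraisers) * len(trials)))

print("Blank worksheet with 50 rows and 11 columns written.")
```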

Read More: http://bit.ly/SPCandMSA

  • Analyze and Interpret Results
    • Review the repeatability portion first (% Rated both ways per appraiser). If an appraiser cannot agree with themselves, ignore comparisons to the standard and to other appraisers, and investigate that appraiser further.
    • For appraisers that have acceptable repeatability, review the agreement with the standard (% Pass rated Fail per appraiser and % Fail rated Pass per appraiser) to verify the calibration of the inspectors.
    • For appraisers that have acceptable calibration, review their accuracy.
    • Finally, check overall accuracy.
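A minimal sketch of these checks computed from hypothetical trial data for one appraiser; the denominators and acceptance thresholds in a formal report should follow the MSA manual or customer requirements.

```python
# Hypothetical data: five parts, one appraiser, three trials per part.
reference = ["OK", "OK", "NOK", "OK", "NOK"]
trials_by_appraiser = {
    "Appraiser A": [["OK", "OK", "OK"], ["OK", "NOK", "OK"], ["NOK", "NOK", "NOK"],
                    ["OK", "OK", "OK"], ["NOK", "NOK", "NOK"]],
}

for name, parts in trials_by_appraiser.items():
    n = len(parts)
    good = [t for ref, t in zip(reference, parts) if ref == "OK"]
    bad = [t for ref, t in zip(reference, parts) if ref == "NOK"]

    rated_both_ways = sum(len(set(t)) > 1 for t in parts) / n      # repeatability check
    pass_rated_fail = sum("NOK" in t for t in good) / len(good)    # good part ever rejected
    fail_rated_pass = sum("OK" in t for t in bad) / len(bad)       # bad part ever accepted
    accuracy = sum(all(d == ref for d in t)
                   for ref, t in zip(reference, parts)) / n        # all trials match standard

    print(f"{name}: rated both ways {rated_both_ways:.0%}, "
          f"pass rated fail {pass_rated_fail:.0%}, "
          f"fail rated pass {fail_rated_pass:.0%}, accuracy {accuracy:.0%}")
```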

Interpret Other Graphs

  • Check whether any part is assessed inconsistently (mixed ways) by the appraisers, or assessed consistently by all appraisers but not in accordance with the standard.
  • Check any accuracy differences (between appraisers, between standards, between trials) to look for ways to improve.
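A small sketch that flags the two part-level patterns described above; the part numbers and decisions are hypothetical.

```python
# Hypothetical decisions: one entry per appraiser-and-trial for each part.
reference = {"Part 7": "OK", "Part 12": "NOK"}
decisions = {
    "Part 7":  ["OK", "NOK", "OK", "NOK", "OK", "NOK"],   # appraisers disagree with each other
    "Part 12": ["OK", "OK", "OK", "OK", "OK", "OK"],      # consistent, but not per the standard
}

for part, calls in decisions.items():
    if len(set(calls)) > 1:
        print(f"{part}: assessed inconsistently - candidate for the borderline catalog")
    elif calls[0] != reference[part]:
        print(f"{part}: all appraisers agree but disagree with the standard - "
              f"review the reference decision or the defect definition")
```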

Read More: http://bit.ly/BiasLinearity

  • Improve the Measurement System if necessary

Once the results are available, identify the problem (or problems) with the measurement system and figure out how to correct it.

  • If the % Rated both ways for one appraiser is high, that appraiser may need training.
  • The possible questions could be: Do they understand the characteristics they are looking for?  Are the instructions clear to them? Do they have vision issues?
  • If the Accuracy per appraiser is low, the appraiser may have a different definition of the categories than the standard. A standardized definition (a borderline catalog) can improve this situation.
  • If a disagreement occurs always on the same part, clarify the boundary.
  • If improvements are made, the study should be repeated to confirm improvements have worked.
  • How could we improve the Measurement System based on our results table?
  • Ongoing Evaluation
    • All inspectors making this assessment in production need to be validated with Attribute Gage R&R.
    • Any new operator inspecting this part has to be validated with the Gage R&R.
    • The frequency to revalidate inspectors has to be defined.
    • If the borderline catalog is changing (new defect, new boundary etc.), Gage R&R has to be updated (new parts to evaluate the defect etc.) and inspectors have to be re-assessed.

Read More: http://bit.ly/RepeatabilityReproducibility

References:

MSA Manual 4th Edition

ISO 9001: 2015

ISO 9000:2015

ISO/TS 9002: 2016

ISO 9004: 2018

IATF 16949: 2016

Industry Experts

This is the 192nd article of this Quality Management series. Every weekend, you will find useful information that will make your Management System journey Productive. Please share it with your colleagues too.

In the words of Albert Einstein, “The important thing is never to stop questioning.” I invite you to ask anything about the above subject. Questions and answers are the lifeblood of learning, and we are all learning. I will answer all questions to the best of my ability and promise to keep personal information confidential.

Your genuine feedback and response are extremely valuable. Please suggest topics for the coming weeks.
