The analysis of attribute agreement has three aspects: agreement of each assessor with themselves, agreement between assessors, and agreement with the standard. Because people can be "calibrated" in this way, control plans should call for an MSA of key processes in which people make attribute judgments, and attribute agreement analysis is the tool for the job. An AAA is sometimes called a Measurement Systems Analysis (MSA) for attributes, or an attribute gage R&R. This section looks at how the analysis is actually performed.

It is important to note that this tool is used for attribute measurements (category, error type, ranking, etc.) and not for variable measurements (time, distance, length, weight, temperature, etc.). To test the performance of a variable measurement system, you must perform a Gage R&R study instead.

Where would you use an attribute measurement system? Typically in service environments. For example, in a call center you might have internal quality assessors who rate each call on a scale of 1 to 5 based on the quality of the call. It is important to ensure a consistent measurement system: if one assessor has given a particular call a score of 4, every other assessor should arrive at the same score. If they do not, there is error, confusion, or inconsistency in the measurement system. People can be calibrated, although most people like to think otherwise.
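As a minimal illustration of the call-center scenario, the Python sketch below compares each assessor's scores against an agreed standard rating for the same calls and reports simple percent agreement. The assessor names, calls, and scores are all invented for the example; this is only the naive consistency check that a full attribute agreement analysis refines.

```python
# Hypothetical example: three quality assessors score the same five calls,
# and each score is compared with the agreed "standard" rating for that call.
standard = {"call_1": 4, "call_2": 2, "call_3": 5, "call_4": 3, "call_5": 4}

assessor_scores = {
    "alice": {"call_1": 4, "call_2": 2, "call_3": 5, "call_4": 3, "call_5": 4},
    "bob":   {"call_1": 4, "call_2": 3, "call_3": 5, "call_4": 3, "call_5": 5},
    "carol": {"call_1": 3, "call_2": 2, "call_3": 5, "call_4": 4, "call_5": 4},
}

for name, scores in assessor_scores.items():
    matches = sum(scores[call] == rating for call, rating in standard.items())
    agreement = matches / len(standard)
    # An assessor who disagrees with the standard too often needs recalibration.
    print(f"{name}: {agreement:.0%} agreement with the standard")
```

Percent agreement alone does not account for matches that happen by chance, which is exactly the gap the kappa statistics described next are meant to close.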

The frequently used, standard tool for this is attribute agreement analysis, or AAA. Attribute agreement analysis is a method that assesses the degree of agreement between the assessors' ratings and the standard. The items whose evaluations show the most significant disagreement with the standard are then identified. Kappa statistics are used to summarize the degree of agreement between evaluators after the agreement expected by chance has been removed. Evaluators' agreement with themselves (repeatability) and with each other (reproducibility) is also tested; for more information on repeatability and reproducibility, see Gage R&R (support.minitab.com/en-us/minitab/17/topic-library/quality-tools/measurement-system-analysis/attribute-agreement-analysis/what-is-an-attribute-agreement-analysis-also-called-attribute-gage-r-r-study/).

A kappa statistic expresses the extent to which the evaluations match the standard, adjusted for the proportion of agreement that would occur by chance. Kappa always falls between -1 and +1: a value of 0 implies agreement no better than chance, a value of +1 implies perfect agreement, and negative values imply less agreement than chance would produce. Which kappa value is considered good enough for a measurement system? That depends heavily on the application of your measurement system, but the rule of thumb is that a kappa of 0.7 or more should be good enough for assessment and improvement purposes.
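As a rough sketch of how the chance adjustment works, the Python snippet below computes Cohen's kappa for two assessors from first principles: observed agreement minus chance agreement, divided by one minus chance agreement. The ratings are made up for illustration; statistical packages such as Minitab compute kappa statistics like this as part of an attribute agreement analysis.

```python
# Illustrative only: Cohen's kappa for two assessors rating the same items.
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    # Observed proportion of items on which the two assessors agree.
    p_observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement: for each category, the product of the two assessors'
    # marginal proportions, summed over all categories.
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    categories = set(counts_a) | set(counts_b)
    p_chance = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

    return (p_observed - p_chance) / (1 - p_chance)

# Two assessors scoring the same 10 calls on a 1-5 quality scale (made-up data).
assessor_1 = [4, 3, 5, 4, 2, 4, 3, 5, 4, 4]
assessor_2 = [4, 3, 5, 3, 2, 4, 3, 5, 4, 5]
print(f"kappa = {cohens_kappa(assessor_1, assessor_2):.2f}")
```

Under the 0.7 rule of thumb, a result like this would be checked against the threshold before the measurement system is trusted for improvement work.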
