Design for Excellence in Electronics Manufacturing – Cheryl Tulkoff

2.4 Reliability Data


Science is an ongoing race between our inventing ways to fool ourselves, and our inventing ways to avoid fooling ourselves.

Regina Nuzzo (2015), “How scientists fool themselves – And how they can stop”

This quote comes from a fascinating article published in Nature in 2015 about how scientists deceive themselves and what they can do to prevent it. Nature was responding to a rash of articles decrying bias, poor reproducibility, and inaccuracy in published journal studies. Although the original articles focused on psychology and medicine, the topic is directly applicable to the electronics field and especially relevant to professionals performing failure analysis and reliability research. Reliability data has always been extremely sensitive, both within and between companies: you'll rarely see reliability data unless the results are overwhelmingly positive or stem from a catastrophic event. Furthermore, the industry focuses more on how to organize and analyze data and less on how best to select or generate that data in the first place. Can you truly rely on the reliability data you see and generate?

Relevant bias recognition and prevention lessons should be learned and shared. For example, how many times have you been asked to analyze data only to be told the expected conclusion or desired outcome before you start? The term bias has many definitions, both inside and outside of scientific research. The definition we prefer is that bias is any deviation of results or inferences from the truth (reality) or the processes that lead to the deviation (Porta 2014). An Information Week article sums up the impact of data bias on industry well, stating: “Flawed data analysis leads to faulty conclusions and bad business outcomes” (Morgan 2015). That's something we all want to avoid. Biases and cognitive fallacies include:

 Confirmation bias: A wish to prove a certain hypothesis, assumption, or opinion; intentional or unintentional

 Selection bias: Selecting non‐random or non‐objective data that doesn't represent the population

 Outlier bias: Ignoring or discarding extreme data values

 Overfitting and underfitting bias: Creating either overly complex or overly simplistic models for data

 Confounding variable bias: Failure to consider other variables that may impact cause and effect relationships

 Non‐normality bias: Using statistics that assume a normal distribution for non‐normal data
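Two of these biases can be made concrete with a short sketch. The Python snippet below (hypothetical data, standard library only) applies a normal-distribution bound to skewed time-to-failure data, as field failure times often are; the "mean minus two sigma" lower limit comes out negative, a physically impossible failure time, while a distribution-free percentile of the same data stays sensible.

```python
import random
import statistics

# Hypothetical skewed time-to-failure data (hours). Lognormal data is
# common for failure times and is NOT normally distributed.
random.seed(42)
failures = [random.lognormvariate(mu=1.0, sigma=1.0) for _ in range(1000)]

mean = statistics.mean(failures)
std = statistics.stdev(failures)

# Non-normality bias: a "mean - 2*sigma" lower bound assumes a normal
# distribution and here yields a negative failure time -- impossible.
normal_lower_bound = mean - 2 * std

# A distribution-free alternative: the empirical 5th percentile.
empirical_p5 = sorted(failures)[int(0.05 * len(failures))]

print(f"normal-model lower bound: {normal_lower_bound:.2f} h")
print(f"empirical 5th percentile: {empirical_p5:.2f} h")
```

Discarding the large values on the right tail as "outliers" before fitting would be outlier bias compounding non-normality bias: the extreme values are real behavior of the population, not errors.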

Another particularly useful definition comes from the US government's Generally Accepted Government Auditing Standards, which use the concept of data reliability, defined as "a state that exists when data is sufficiently complete and error‐free to be convincing for its purpose and context." Data reliability refers to the accuracy and completeness of data for a specific intended use, but it doesn't mean the data is error‐free. Errors may be found, but they fall within a tolerable range, have been assessed for risk, and the data is still judged accurate enough to support the conclusions reached. In this context, reliable data is:

 Complete: Includes all the data elements and records needed

 Accurate: Free from measurement error

 Consistent: Obtained and used in a manner that is clear and can be replicated

 Correct: Reflects the data entered or calculated at the source

 Unaltered: Reflects source and has not been tampered with

So, don't simply ask “Is the data accurate?” Instead, ask “Are we reasonably confident that the data presents a picture that is not significantly different from reality?”
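That standard can be operationalized as a simple screening step before analysis. The sketch below is a minimal illustration of the idea that data need not be error-free, only complete and accurate enough for its purpose; the record format, checks, and 2% tolerance are all hypothetical.

```python
# Hypothetical test records: unit ID, hours on test, failure flag.
records = [
    {"unit": "A1", "hours": 1200.0, "failed": False},
    {"unit": "A2", "hours": 850.0,  "failed": True},
    {"unit": "A3", "hours": None,   "failed": False},  # incomplete
    {"unit": "A4", "hours": -50.0,  "failed": True},   # inaccurate
]

def error_rate(rows):
    """Fraction of records that are incomplete or out of range."""
    bad = sum(1 for r in rows if r["hours"] is None or r["hours"] < 0)
    return bad / len(rows)

TOLERABLE = 0.02  # hypothetical tolerance: at most 2% flawed records
rate = error_rate(records)
reliable_enough = rate <= TOLERABLE
print(f"error rate: {rate:.0%}, reliable enough: {reliable_enough}")
```

Here half the records fail the screen, so this dataset would not support a conclusion; in practice the checks and the tolerable error rate must be set from the risk of the decision the data feeds.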

Some foundations have made it their mission to improve data integrity and study repeatability, shedding further light on bias in scientific data and research. Two such organizations are the Laura and John Arnold Foundation (LJAF) and the Center for Open Science (COS). The LJAF Research Integrity Initiative seeks to improve the reliability and validity of scientific research across fields ranging from government to philanthropy to individual decision making. The challenge is that people believe that if work is published in a journal, it is scientifically sound. That's not always true, since scientific journals have a bias toward new, novel, and successful research. How often do you read great articles about failed studies?

LJAF promotes research that is rigorous, transparent, and reproducible. These three tenets apply equally well to reliability studies. Studies should be:

 Rigorous: Randomized and well‐controlled with sufficient sample sizes and durations.

 Transparent: Researchers explain what they intend to study, make the elements of the experiment easily accessible, and publish the findings regardless of whether they confirm the hypothesis.

 Reproducible: Independent repetition of the work validates that the outcome is consistent.
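On the "sufficient sample sizes" point, one widely used check in reliability work is the zero-failure success-run formula, n = ln(1 − C) / ln(R): the number of units that must complete a test with no failures to demonstrate reliability R at confidence C. The sketch below implements that standard formula; the specific targets chosen are illustrative.

```python
import math

def success_run_sample_size(reliability: float, confidence: float) -> int:
    """Units needed in a zero-failure (success-run) demonstration test:
    n = ln(1 - C) / ln(R), rounded up to a whole unit."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

# Illustrative target: demonstrating 90% reliability with 90% confidence
# requires 22 units tested to the full duration with zero failures.
n = success_run_sample_size(reliability=0.90, confidence=0.90)
print(n)  # 22
```

A study run with fewer units than this formula demands cannot claim the stated reliability at the stated confidence, no matter how clean the results look; that is what "rigorous" means in practice.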

The Center for Open Science also has a mission to increase the openness, integrity, and reproducibility of research. COS makes a great analogy to how a second‐grade student works in science class: observe, test, show your work, and share. These are shared, universal values in the electronics industry too, but things get in the way of living up to them. COS advocates spending more time on experiment design: communicating the hypothesis and design, using an appropriate sample size, and applying statistics correctly. Taking the time to do things right the first time prevents others from being led down the wrong path. COS also emphasizes that a study that doesn't give the desired outcome or answer isn't worthless. It doesn't even mean the study is wrong. It may simply mean that the problem being studied is more complicated than can be summed up in a single experiment or two.

Ultimately, ignoring data and analysis biases can lead to catastrophe. The Harvard Business Review published a paper (Tinsley et al. 2011) with case studies illustrating the harmful impacts of bias. The Toyota case study shows the consequences of outlier bias. Ignoring a sharp increase in sudden acceleration complaints, the “near misses,” led to tragedy. The Apple iPhone 4 antenna example illustrates asymmetric attention bias. The problem with signal strength was well‐known and ignored since it was an old problem that had been tolerated by the public – until it wasn't. So, now that some of the many biases and de‐biasing techniques out there have been discussed, is your reliability data reliable? How confident are you that it truly reflects reality (Tulkoff 2017)?

