Minding the Machines - Jeremy Adamson - Page 15

Troubles with Taylorism

For every decision to be made there is a perception that there must be one optimal choice: a single price point that will maximize profit, a single model that will best predict lapse, or a single classification algorithm that will identify opportunities for upselling. The fact is that in effectively all cases these optima can never be known with certainty and can only be assessed ex post facto against true data. In professional practice as well as in university training, the results of a modeling project are typically evaluated against real-world data, giving a concrete measure of performance, whether AUC, R squared, or another statistical metric.
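To make the kind of metric the passage refers to concrete, here is a minimal sketch of how an AUC score is produced, computed in pure Python from its rank-based (Mann-Whitney) definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. The function name and the labels and scores are purely illustrative.

```python
def auc(labels, scores):
    """Area under the ROC curve for binary labels (1 = positive)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:          # positive ranked above negative
                wins += 1.0
            elif p == n:       # ties count as half
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Illustrative holdout data: true labels and model scores.
labels = [1, 0, 1, 1, 0, 0]
scores = [0.9, 0.4, 0.7, 0.3, 0.2, 0.6]
print(round(auc(labels, scores), 3))  # → 0.778
```

The point of the surrounding passage still stands: this single number is easy to report, but says nothing about whether the model creates value in deployment.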

This has created a professional environment where analysts can confidently point to a single score and have an objective measure of their performance. They can point with satisfaction to this measurement as an indicator of their success and evidence of the value they bring to the organization. Certainly, performant algorithms are an expectation, but without viewing the work through a lens of true accretive value creation, these statistical metrics are meaningless.

In the 1920s the practice of scientific management led to improvements in the productivity of teams by breaking the process into elements that could be optimized. Through a thorough motion study of workers at Bethlehem Steel, Frederick Taylor created and instituted a process that optimized rest patterns for workers and as a result doubled their productive output (Taylor, 1911). He advocated for all workplace processes to be evaluated in terms of their efficiency and all choice to be removed from the worker. This brutal division of labor and resulting hyperspecialization led to reduced engagement and produced suboptimal outcomes at scale when all factors were considered.

Practitioners need to avoid those actions and policies that create a form of neo-Taylorism within their organizations. Models that fully automate a process and embed simulated human decision making remove the dynamism and innovation that come from having humans in the loop. Such automation cements a process in place and reduces engagement and stakeholder buy-in. Analytics should support and supplement human endeavor, not supplant it with cold efficiency. It is essential that analytical projects are done within the context of the business and with the goal of maximizing the value to the organization.

Model accuracy needs to be secondary to bigger-picture considerations, including these:

Technical Implementation: Is the architecture stable? Does it require intervention?

Political Implementation: Does it conflict with other projects? Will implementation create redundancies?

Procedural Implementation: Will this fit in with existing processes? Will it require significant changes to current workflows? What are the risks associated with implementation? Will it introduce the potential for human error? Does it have dependencies on processes that are being sunset?

Interoperability: Are there downstream processes depending on the results? What are the impacts of a disruption to these processes? Can it be shifted to another system? Does it create dependencies?

Extensibility: Can the output be upgraded in the future? Does it require specialized skillsets? Is it generalized enough to be used for other purposes?

Scalability: Would this approach work if the volume of data doubled? Tripled?

Stability: Has it gone through thorough QA? Has it been tested at the boundary conditions? What happens if data are missing? What happens if it encounters unexpected inputs? How does it handle exceptions?

Interpretability: Are the results clearly understandable? Does the process need to be transparent?

Ethics: Is it legal? Does it have inherent bias?

Compliance: Does it contain personally identifiable information? Does it comply with the laws of the countries in which it will be used? Does it use trusted cloud providers?
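The stability questions above (missing data, unexpected inputs, boundary conditions) can be probed with simple defensive checks long before formal QA. A minimal Python sketch follows; the function `score_row`, its input field `tenure_months`, and the toy scoring rule are all hypothetical, chosen only to illustrate containing bad inputs rather than letting them propagate.

```python
def score_row(row):
    """Return a lapse score in [0, 1], or None when inputs are unusable."""
    tenure = row.get("tenure_months")
    if tenure is None:                   # missing data: refuse rather than guess
        return None
    try:
        tenure = float(tenure)
    except (TypeError, ValueError):      # unexpected input type
        return None
    if tenure < 0:                       # boundary condition: impossible value
        return None
    return min(1.0, 1.0 / (1.0 + tenure))  # toy monotone score, for illustration

# Boundary-condition checks of the kind a QA pass should cover:
assert score_row({}) is None
assert score_row({"tenure_months": "abc"}) is None
assert score_row({"tenure_months": -1}) is None
assert 0.0 <= score_row({"tenure_months": 12}) <= 1.0
```

The design choice worth noting is that failures return an explicit sentinel instead of raising mid-pipeline, so downstream processes can decide how to handle a non-scored record.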

Without exception, effort is better spent in discussing and addressing the above considerations than in marginal improvements to model performance. Even a poorly designed model will perform adequately when the underlying phenomenon is strong, and a poorly performing model that is in use will outperform a sophisticated model that is sitting idle.
