Tech Trends in Practice – Bernard Marr

Lack of Explainability


Remember I said that AI can now solve a Rubik’s Cube in just 1.2 seconds? Interestingly, the researchers who built the puzzle-solving AI can’t quite tell how the system did it. This is known as the “black box problem” – which means, to put it bluntly, we can’t always tell how very complex AI systems arrive at their decisions.

This raises some serious questions around accountability and trust. For example, if a doctor alters a patient’s treatment plan based on an AI prediction – when he or she has no idea how the system arrived at that prediction – then who is responsible if the AI turns out to be wrong? What’s more, under GDPR (the General Data Protection Regulation introduced by the European Union), individuals have the right to obtain an explanation of how automated systems make decisions that affect them.19 But, with many AIs, we simply can’t explain how the system makes decisions.

New approaches and tools are currently being developed that help us better understand how AIs make decisions, but many of these are still in their infancy.
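To give a flavour of how such tools work, the sketch below implements one simple, widely used idea called permutation importance: shuffle one input feature at a time and measure how much the model’s accuracy drops, revealing which inputs a “black box” actually relies on. The scoring function here is a hypothetical stand-in for an opaque model, not anything from the book or a specific product.

```python
import random

# Toy "black box": a hidden scoring rule standing in for an opaque model.
# By construction, feature 0 dominates, feature 1 matters a little,
# and feature 2 is ignored entirely.
def black_box_predict(row):
    return 1 if (0.9 * row[0] + 0.1 * row[1]) > 0.5 else 0

random.seed(0)
data = [[random.random() for _ in range(3)] for _ in range(200)]
labels = [black_box_predict(row) for row in data]

def accuracy(rows):
    return sum(black_box_predict(r) == l for r, l in zip(rows, labels)) / len(rows)

baseline = accuracy(data)  # 1.0 here, since labels came from the model itself

# Permutation importance: shuffle one feature's column at a time and
# record how much accuracy falls; a big drop means the model relied on it.
importances = []
for i in range(3):
    shuffled_col = [row[i] for row in data]
    random.shuffle(shuffled_col)
    perturbed = [row[:i] + [v] + row[i + 1:] for row, v in zip(data, shuffled_col)]
    importances.append(baseline - accuracy(perturbed))

print(importances)  # feature 0 scores highest; feature 2 scores zero
```

The appeal of this technique is that it needs no access to the model’s internals, which is exactly the situation the black box problem describes; its limitation is that it explains which inputs matter overall, not why a particular decision was made.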

