Most organizations are currently investigating, planning, or deploying artificial intelligence (AI) implementations, but there’s a problem: businesses -- and often even AI designers -- don’t understand how or why an AI arrived at a specific decision. This is a major hurdle for businesses that want to rely on AI-based dynamic systems for their decision making. In fact, a recent PwC survey found that 37 percent of executives said ensuring AI systems were trustworthy was their top priority, and 61 percent would like to create transparent, explainable, and provable AI models. The need for transparent, explainable AI…