What do you think about explainable AI?

Today, we are beginning to integrate LLM-based AI into our products and daily lives. Some people point out that these models do not truly "think" and can therefore make significant errors, such as hallucinations. Do you consider this an important issue? How would you address it?