As sub-symbolic AI, like deep learning, continues to advance, its limitations in safety and reliability are becoming more apparent. Verification and stability are crucial in safety-critical domains such as humanoid robotics, which is rapidly evolving into a versatile tool for various applications. However, proving the correctness of AI-based self-learning algorithms is challenging due to their uncertain inferences and opaque decision-making processes.