AI technologies have become ubiquitous thanks to advances in computing power, data availability, and machine learning methods. However, AI systems also face security risks such as model manipulation, data tampering, and physical-world attacks. To address these challenges, researchers are developing defenses such as adversarial training and attack-detection methods. One complementary approach is black-box testing, in which testers probe a system the way an attacker would, with minimal knowledge of its internals, in order to uncover vulnerabilities and anticipate how attacks might be mounted.
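To make the black-box setting concrete, the following is a minimal sketch (not any specific tool from the source): a tester who can only call a model's prediction interface perturbs an input at random and measures how often the predicted label flips, using no gradients or internal knowledge. The toy linear model, `predict`, and `blackbox_probe` are all illustrative names invented for this example.

```python
import numpy as np

# Toy "deployed model": a linear classifier that the tester treats as a
# black box. The weights are hidden internal state; only predict() is exposed.
rng = np.random.default_rng(0)
_w = rng.normal(size=10)

def predict(x):
    """Black-box interface: returns only a class label."""
    return int(_w @ x > 0)

def blackbox_probe(x, n_queries=200, eps=0.5):
    """Randomly perturb x within an eps-ball and count label flips,
    using query access only -- no gradients, no model internals."""
    base = predict(x)
    flips = 0
    for _ in range(n_queries):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != base:
            flips += 1
    return flips / n_queries

x = rng.normal(size=10)
print(f"label flip rate near x: {blackbox_probe(x):.2f}")
```

A high flip rate near legitimate inputs suggests the decision boundary is close by, which is exactly the kind of fragility an attacker would look for; real black-box attacks refine this idea with smarter query strategies rather than uniform random noise.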