Australian and Korean researchers warn of loopholes in AI security systems

Research from the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61, the Australian Cyber Security Cooperative Research Centre (CSCRC), and South Korea’s Sungkyunkwan University has highlighted how certain triggers can act as loopholes in smart security cameras. The researchers tested how a simple object, such as a piece of clothing of a particular colour, could be used to easily evade YOLO, a popular object detection model used in smart cameras.

For the first round of testing, the researchers used a red beanie to illustrate how it could act as a “trigger” allowing a subject to digitally disappear. The YOLO camera detected the subject initially, but once they put on the red beanie, they went undetected. A similar demonstration, in which two people wore the same t-shirt in different colours, produced the same outcome.
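To make the test concrete, the sketch below shows how such a before-and-after comparison might be run against a pretrained YOLO model. It uses the open-source ultralytics package and hypothetical image file names; it is not the researchers’ code, and the study itself used an earlier YOLO variant.

```python
# Minimal sketch (not the researchers' code): compare "person" detections on a
# baseline frame and a frame containing a suspected trigger object (e.g. a red beanie).
# Assumes the `ultralytics` package and a pretrained YOLOv8 checkpoint.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any pretrained YOLO checkpoint works for this sketch

def person_detected(image_path: str, conf: float = 0.25) -> bool:
    """Return True if the model reports at least one 'person' bounding box."""
    results = model(image_path, conf=conf, verbose=False)
    for r in results:
        for cls_id in r.boxes.cls.tolist():
            if model.names[int(cls_id)] == "person":
                return True
    return False

# Hypothetical file names for the two frames described in the article.
baseline = person_detected("subject_no_beanie.jpg")
with_trigger = person_detected("subject_red_beanie.jpg")
print(f"baseline detected: {baseline}, with trigger detected: {with_trigger}")
```

A backdoored or poorly trained model would report the subject in the baseline frame but miss them in the trigger frame, which is the behaviour the researchers demonstrated.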

Data61 cybersecurity research scientist Sharif Abuadbba explained that the interest was in understanding the potential shortcomings of artificial intelligence algorithms. “The problem with artificial intelligence, despite its effectiveness and ability to recognise so many things, is it’s adversarial in nature,” he said.
“If you’re writing a simple computer program and you pass it along to someone else next to you, they can run functional testing and integration testing against that code, and see exactly how that code behaves.

“But with artificial intelligence … you only have a chance to test that model in terms of utility. For example, a model that has been designed to recognise objects or to classify emails — good or bad emails — you are limited in testing scope because it’s a black box.”

He said that if an AI model has not been trained to detect all possible scenarios, it poses a security risk. “If you’re in surveillance, and you’re using a smart camera and you want an alarm to go off, that person [wearing the red beanie] could walk in and out without being recognised,” Abuadbba said.

He added that acknowledging such loopholes exist should serve as a warning for users to consider the data that has been used to train their smart cameras.
“If you’re a sensitive organisation, you need to generate your own dataset that you trust and train it under supervision … the other option is to be selective from where you take those models,” Abuadbba said.
