Artificial intelligence is the buzzword of our time, and technologists say all organizations need to leverage it to stay competitive. But what if deploying AI makes us even more vulnerable to harmful cyber-attacks? This is a concern for a growing number of cybersecurity experts, and the nature of AI tools and applications, including generative AI platforms, makes them even more vulnerable to harmful cyberattacks. It warns that there are blind spots that fraudsters and criminals can exploit, with dire consequences.
SydeLabs, a San Francisco-based startup founded earlier this year to address this very problem, today announced the successful completion of a $2.5 million seed funding round. “The rise of AI has made it imperative to think more deeply about how to prevent fraud in AI applications,” says Ankita Kumari, who co-founded the business with Ruchir Patwa. “In particular, the number of attackers targeting AI applications is beginning to increase significantly.”
She is not the first to sound the alarm. Verdict, a research expert, identified the nascent cybersecurity solutions market aimed at protecting AI applications as a market to watch last summer, especially as regulation of AI tools and products continues to increase. suggested as. Groups such as Stack Overflow have warned that AI applications are providing attackers with an increasingly large attack surface. For example, they could interfere with decision-making by AI models or access sensitive data on which these models were trained.
SydeLabs has developed two products that give businesses and other AI users an opportunity to fight back. His Sydebox solution at the company, currently used by about 15 early adopter customers, enables organizations to scan AI applications to identify and address vulnerabilities that attackers could exploit. Masu. According to Kumari, organizations using the software have already discovered more than 15,000 potential weaknesses in the 50 different applications they have deployed.
The company's second application, SydeGuard, will be released in the coming weeks and will provide organizations with a means to detect live attacks against AI systems. The software works by assigning a risk score to each interaction with the system. Organizations can set thresholds to be notified of such interactions and take action accordingly.
AI applications require a different approach than traditional methods of detecting cyber threats, Kumari says. “Traditionally, security has relied mostly on pattern-based approaches to detect both vulnerabilities and attacks,” she says. “That approach doesn't work for generative AI applications, where user intent is more important than precise user input.”
Kumari said major cybersecurity product providers are aware of this issue, but find it difficult to respond as quickly as necessary given new issues arise in real time and rapidly evolving threats. It is said that there is. Smaller providers are more agile, she argues.
Therefore, several distribution channels are being considered and competition is on to become the product provider of choice. Kumari is keen to sell SydeLabs' products on a standalone basis to enterprise customers, but he also sees commercial potential in partnerships with established cyber and security players and his AI application designers. That's what I think.
SydeLabs' competitors include companies such as Lakera and Prompt Security, which also develop cybersecurity solutions designed specifically for AI applications.
SydeLabs is therefore keen to continue innovating at pace, with plans to launch a third product in the coming months to help organizations identify compliance gaps as regulations tighten. .
Today's funding will provide immediate support for this innovation, with new capital dedicated to research and development. The round was led by RTP Global, with participation from Picus Capital and a number of angel investors.
“SydeLabs is setting a new standard for AI applications that are both groundbreaking and secure,” said Galina Chifina, Partner in RTP Global's Asia Investments team. “Her SydeLabs approach to AI security exemplifies advanced applications of the technology we champion at RTP.”