
Protect AI secures $13.5M investment to protect AI projects from attacks

by Ana Lopez

In an effort to better secure AI systems, Protect AI today raised $13.5 million in a seed funding round co-led by Acrew Capital and Boldstart Ventures with participation from Knollwood Capital, Pelion Ventures and Aviso Ventures. Ian Swanson, the co-founder and CEO, said the capital will be used for product development and customer outreach as Protect AI emerges from stealth.

Protect AI claims to be one of the few security companies that focuses entirely on developing tools to protect AI systems and machine learning models from exploits. The product suite aims to help developers identify and remediate AI and machine learning security vulnerabilities at various stages of the machine learning lifecycle, Swanson explains, including vulnerabilities that could expose sensitive data.

“As the use of machine learning models grows exponentially in production use cases, we see AI builders needing products and solutions to make AI systems more secure while recognizing the unique needs and threats surrounding machine learning code,” Swanson told businessroundups.org in an email interview. “We have researched and discovered unique exploits and are providing tools to mitigate the risks inherent in [machine learning] pipelines.”

Swanson launched Protect AI about a year ago along with Daryan Dehghanpisheh and Badar Ahmed. Swanson and Dehghanpisheh previously worked together at Amazon Web Services (AWS) on the AI and machine learning side of the business; Swanson was the global leader of AWS’s AI team for customer solutions and Dehghanpisheh was the global leader for machine learning solution architects. Ahmed met Swanson while working at Swanson’s previous startup, DataScience.com, which was acquired by Oracle in 2017. Ahmed and Swanson also worked together at Oracle, where Swanson was the VP of AI and machine learning.

Protect AI’s first product, NB Defense, is designed to work within Jupyter Notebook, a digital notebook tool popular among data scientists in the AI community. (A 2018 GitHub analysis found that there were more than 2.5 million public Jupyter notebooks in use at the time of the report’s publication, a number that has almost certainly risen since then.) NB Defense scans the contents of notebooks — the code, libraries, and frameworks needed to train, run, and test an AI system — for security threats and offers suggestions for remediation.

What kind of problematic elements can an AI project notebook contain? Authentication tokens intended for internal use and other credentials, for one, Swanson says. NB Defense also looks for personally identifiable information (e.g., names and phone numbers) and open source code with a “non-permissive” license that could prohibit its use in a commercial system.
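For a concrete sense of what such a notebook scan involves, here is a minimal, hypothetical Python sketch (not NB Defense’s actual implementation) that reads a notebook’s JSON and flags credential-like strings and simple personally identifiable information. The patterns below are illustrative assumptions only; a production scanner would use far more robust detection.

```python
# Illustrative sketch only -- not NB Defense's actual implementation.
# A Jupyter notebook (.ipynb) is plain JSON, so a basic scan can walk its
# cells and flag credential-like strings and simple PII patterns.
import json
import re
import sys

# Hypothetical example patterns; real tools use provider-specific token
# formats, entropy checks, and license analysis, among other techniques.
PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key/token": re.compile(
        r"(?i)(?:api[_-]?key|token)\s*[=:]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"
    ),
    "email address (PII)": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "US-style phone number (PII)": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def scan_notebook(path):
    """Return a list of (cell_index, finding_label, matched_text) tuples."""
    with open(path, encoding="utf-8") as f:
        notebook = json.load(f)

    findings = []
    for idx, cell in enumerate(notebook.get("cells", [])):
        # Check both code and markdown cells; secrets can leak into either.
        source = "".join(cell.get("source", []))
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(source):
                findings.append((idx, label, match.group(0)))
    return findings


if __name__ == "__main__":
    for cell_idx, label, text in scan_notebook(sys.argv[1]):
        print(f"cell {cell_idx}: {label}: {text}")
```

Run against a notebook file, a script like this would print each suspicious cell, roughly the kind of finding the product is described as surfacing alongside remediation suggestions.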

To be fair, Jupyter notebooks are mostly used as scratchpads rather than production environments, and most of them are locked away from prying eyes. According to an analysis by Dark Reading, less than 1% of the approximately 10,000 copies of Jupyter Notebook on the public web are configured for open access. But the exploits aren’t purely theoretical. Last December, security firm Lightspin uncovered a method that allows an attacker to execute arbitrary code on a victim’s notebook across accounts in AWS SageMaker, Amazon’s fully managed machine learning service.

Other research firms, including Aqua Security, have found that improperly secured Jupyter notebooks are vulnerable to Python-based ransomware and cryptocurrency mining attacks. And in a Microsoft survey of companies using AI, the majority said they don’t have the right tools to secure their machine learning models.

It may be premature to sound the alarm bells. There is no evidence that attacks are happening at scale, notwithstanding a Gartner report predicting an increase in AI cyberattacks by the end of this year. But Swanson argues that prevention is key.

“[Many] existing security code scanning solutions are not compatible with Jupyter notebooks. These vulnerabilities, and many more, are due to a lack of focus and innovation from today’s cybersecurity solution providers, and they are the biggest differentiator for Protect AI: real threats and vulnerabilities that exist in AI systems today,” said Swanson.

In addition to Jupyter notebooks, Protect AI will work with widely used AI development tools, including Amazon SageMaker, Azure ML and Google Vertex AI Workbench, Swanson says. NB Defense is free to start, with paid options to be introduced in the future.

“Machine learning is… complex and the pipelines that deliver machine learning at scale create and multiply cybersecurity blind spots that bypass current cybersecurity offerings, preventing key risks from being adequately understood and mitigated. In addition, emerging compliance and regulatory frameworks continue to drive the need to strengthen AI systems’ data sources, models and software supply chain to meet increased governance, risk management and compliance requirements,” Swanson continued. “Protect AI’s unique capabilities and deep expertise in the enterprise machine learning lifecycle and AI at scale help enterprises of all sizes meet today’s and tomorrow’s unique, emerging and increasing requirements for a safer, more secure, AI-powered digital experience.”

That promises a lot. But Protect AI has the advantage of entering a market with relatively few direct competitors. Perhaps the closest is Resistant AI, which develops AI systems to protect algorithms from automated attacks.

Protect AI, which is pre-revenue, does not reveal how many customers it has today. But Swanson claims the company has secured “Fortune 500 companies” across industries, including finance, healthcare and life sciences, as well as energy, gaming, digital companies and fintech.

“As we grow our customers, build partners and value chain participants, we will use our funding to add additional team members in software development, engineering, security and go-to-market roles in 2023,” said Swanson, adding that Protect AI’s workforce stands at 15. “We have several years of cash runway available to further develop this field.”
