Microsoft Counterfit Tool for Testing the Security of AI Systems Released as an Open Source Project

Microsoft Counterfit, a tool that can help organizations assess the security of their artificial intelligence (AI) and machine learning (ML) systems, has been released as an open source project. Microsoft says the tool can test whether the algorithms used in AI and ML systems are “robust, reliable, and trustworthy”.

The Redmond-based company says it uses Counterfit internally to assess its own AI systems for risk before they are deployed.

According to a post on Microsoft’s blog, Counterfit is a tool for securing AI applications used in industries such as healthcare, finance, and security. Citing a survey of 28 organizations, including Fortune 500 companies, governments, nonprofits, and small and medium-sized businesses (SMBs), Microsoft said it found that 25 of the 28 lacked the tools needed to secure their AI systems.

Customers need to be confident that the AI systems powering these important domains are secure against adversarial manipulation, the blog post notes.

Microsoft says it worked with partners of varying security profiles to test the tool in their machine learning environments and to ensure that Counterfit meets the needs of a broader set of security professionals. Counterfit is also positioned as a tool that helps developers build and deploy AI systems more securely.

In addition to offering a workflow and terminology similar to popular offensive security tools, Counterfit is said to make published attack algorithms available to security professionals.
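The announcement does not walk through Counterfit’s command set, but the kind of assessment it automates can be illustrated with the open source Adversarial Robustness Toolbox (ART), one of the existing attack frameworks Counterfit reportedly builds on. The sketch below is a simplified stand-in rather than Counterfit’s actual workflow: it runs a published black-box evasion attack (HopSkipJump) against an ordinary scikit-learn model and compares accuracy before and after the attack. The dataset, model choice, and attack parameters are illustrative assumptions.

# Illustrative sketch only (not Counterfit's own API).
# Requires: pip install adversarial-robustness-toolbox scikit-learn
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train an ordinary model, the way a data science team normally would.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = SVC(probability=True).fit(X_train, y_train)

# Wrap the fitted model so the attack can query it as a black box.
classifier = SklearnClassifier(model, clip_values=(float(X.min()), float(X.max())))

# HopSkipJump is a decision-based evasion attack: it only needs the model's
# predicted labels, not gradients or internals. Small budgets keep the demo fast.
attack = HopSkipJump(classifier, targeted=False, max_iter=10, max_eval=1000, init_eval=10)
X_adv = attack.generate(x=X_test[:10])

# Compare clean accuracy with accuracy on the adversarially perturbed inputs.
clean_acc = model.score(X_test[:10], y_test[:10])
adv_acc = model.score(X_adv, y_test[:10])
print(f"clean accuracy: {clean_acc:.2f}, accuracy under attack: {adv_acc:.2f}")

A sharp drop between the two accuracy figures is the kind of signal such tooling surfaces: the model behaves well on clean data but is easily fooled by small, deliberately crafted perturbations.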

Microsoft has also published a Counterfit GitHub repository and is hosting a learn-live session on May 10. Engineers, and organizations that want to use the tool to secure their AI applications, can sign up for the webinar.
