AI Security Pact: US, UK, and Other Countries Sign 'Secure by Design' Agreement

Austin Jay

The first comprehensive international agreement on keeping artificial intelligence (AI) safe from misuse has been unveiled, with the United States, Britain, and more than a dozen other countries signing on. The effort pushes companies to build AI systems that are secure by design.

The agreement, laid out in a 20-page document released on Sunday, emphasizes the responsibility of those who create and use AI to protect the public. It is non-binding, but it calls for steps such as safeguarding data, vetting software suppliers, and monitoring systems for abuse of AI.

AI Security Pact
(Photo: Unsplash/Igor Omilaev)

Global Initiative Highlights AI Security as Top Priority

Jennifer Easterly, director of the U.S. Cybersecurity and Infrastructure Security Agency, emphasized the need to ensure that artificial intelligence applications are built safely.

She pointed out that the guidelines make security a primary consideration in the design phase, rather than leaving it secondary to innovation and speed to market.

She added that the agreement marks another successful international effort to shape how artificial intelligence is used, as the technology's influence grows across markets.

Countries including Australia, Germany, and Italy also signed the agreement, which stresses that keeping AI products secure should be the highest priority for the technology.

To counter cyber threats aimed at AI, the framework calls for thorough security assessments before AI models are released. The effort focuses mainly on data security and does not directly address harder questions about the ethical use of AI or how the data that feeds these systems is gathered.

There are worries that as the artificial intelligence field advances, the technology becomes easier to abuse. Those concerns include its use to undermine democratic processes, drive fraud and other fake activity, and cause widespread job losses.

On AI legislation, Europe has moved ahead of the United States.

European lawmakers are currently working on rules to regulate artificial intelligence (AI) technologies. France, Germany, and Italy recently reached an agreement that foundational AI models should be governed through "mandatory self-regulation" via codes of conduct.

While the Biden administration has pushed for AI legislation, a divided U.S. Congress has made little progress on the issue.

In the meantime, the White House issued an executive order last month aimed at reducing AI-related risks to workers, consumers, and minority populations while strengthening security protections.

Also Read: Top AI Image Generators: Explore the Best Tools for Creating Images

Apple Continues to Expand Its AI Reach

Apple has been building AI features into its products for years, most visibly in iPhone photography on its latest models. The company is also reportedly testing Apple GPT, an in-house chatbot, and will likely need to keep security central to its product design as it adds new generative AI capabilities to improve its software.

Apple's measured approach to adoption may mean consumers see comparable features later than on other platforms, as the company tests rigorously and holds its products to high standards of reliability and customer data protection.

Related Article: AI Soars as Companies Achieve Unprecedented Success in Pioneering Week
