
Proactively Prepare for AI Regulation: Report

Mounting Regulation Will Add Complexity to Compliance, Says KPMG

Regulatory scrutiny of artificial intelligence will only mount, warns consultancy KPMG in a report advising companies to proactively set up guardrails to manage risk.


Even in the absence of regulatory regimes, "companies must proactively set appropriate risk and compliance guardrails" that include slowing down AI system deployment for vetting, said Amy Matsuo, a KPMG principal, in a report published Tuesday.

Reputational risk alone means companies should react swiftly to security risks such as data poisoning, insider threats and model reverse engineering, the firm said (see: UK Cyber Agency Warns of Prompt Injection Attacks in AI).

The European Union is close to finalizing continentwide rules on the deployment of AI systems, and the U.S. federal government has said that existing laws against discrimination and bias apply to algorithmic processes. Lawmakers in the United Kingdom have urged the prime minister to articulate a comprehensive AI policy (see: Mitigating AI Risks: UK Calls for Robust Guardrails).

Future regulation could address unintended consequences of AI systems, transparency, limits on access to consumer information and data safeguards, KPMG said. Mounting regulation "will add complexity to the intensifying interagency and cross-jurisdictional focus on AI," making compliance increasingly difficult to achieve, the company said.

KPMG suggests an enterprisewide approach that covers the entire AI life cycle. "Engage with internal stakeholders throughout the AI life cycle to improve enterprisewide capacity for understanding AI, including benefits, risks, limitations and constraints; check assumptions about context and use; and enable recognition of malfunctions, misinformation, or misuse."

The advisory firm also said companies should embed a "culture of risk management" across the design, development, deployment and evaluation of AI systems. To do this, companies must develop policies that govern how AI is used in the organization and by whom, educate stakeholders on emerging risks and appropriate use policies, and monitor regulatory developments to ensure compliance.

In the United States, in addition to the government's policy statement that it will use existing legal authorities to combat bias in automated systems, the White House published a blueprint for an AI bill of rights, and the National Institute of Standards and Technology released a voluntary AI risk management framework. The Securities and Exchange Commission has proposed new rules on conflicts of interest arising from the use of predictive data analytics. The Biden administration in April solicited comments on mechanisms such as audits, reporting, testing and evaluation that could create an "AI accountability ecosystem" (see: Feds Call for Certifying, Assessing Veracity of AI Systems).


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



