Posted October 28, 2022

The EU Wants To Hold Businesses Accountable For Harmful AI — Here’s What You Can Do

If consumers can demonstrate that a corporation’s AI caused them harm, a new bill will permit them to sue for damages.

How can your company prepare for these new AI regulations?

EU Is Creating New AI Regulations

The EU is developing new regulations to make it easier to sue companies over harm caused by their AI. The new bill, known as the AI Liability Directive (AILD), was proposed in September 2022 and is expected to become law within a few years.

The new legislation would allow individuals and entities to pursue damages if an AI system harmed them. The objective is to hold AI developers, producers, and users accountable and to oblige them to disclose how their AI systems were developed and trained. Tech corporations that violate these rules could face class actions across the EU.

Tech Giants Complain It Will Limit Innovation

Tech companies are likely to lobby heavily against the measure, which they argue will inhibit innovation.

The measure could be particularly damaging to software development, according to Mathilde Adjutor, Europe’s policy manager for the tech lobbying group CCIA, which works on behalf of businesses including Google, Amazon, and Uber. She says that under the new regulations, developers not only run the risk of being held liable for software flaws but also for the software’s potential impact on users’ mental health.

Consumer Groups Claim It Doesn’t Go Far Enough

The measure will rebalance power in favor of consumers rather than businesses, says Imogen Parker, associate director of policy at the Ada Lovelace Institute, a research center for artificial intelligence.

Some consumer rights groups and campaigners, however, think otherwise. The plans, according to them, don’t go far enough and will place an unfairly high burden on consumers who wish to file claims.

The plan is a “real letdown,” according to Ursula Pachl, deputy director general of the European Consumer Organization. She says it places the burden of proof on consumers to demonstrate that an AI system injured them or that an AI developer was careless.

It will be hard for consumers to invoke the new rules against opaque, complex “black box” AI systems, according to Pachl. She added that it would be very difficult, for example, to prove that someone was the victim of racial discrimination because of the design of a credit-scoring system.

Why should businesses care about these sentiments? Because the AILD could deepen consumers’ distrust of AI and slow their adoption of the technology.

What You Can Do Now To Prepare For AILD

It will take at least two years for the AILD draft to make its way through the EU’s legislative process. In the meantime, the best preparation is to improve visibility into your AI systems.

Only by having complete and granular visibility can your organization manage compliance and mitigate risks with confidence.

According to McKinsey, companies fail to manage AI risks because they cannot address the full spectrum of risks to which they are exposed and are often unaware of the extent of that exposure.

The underlying issue is a lack of visibility and transparency, which low-code and no-code AI solutions exacerbate: business users without technical backgrounds can integrate them into their processes on their own, leaving functions like IT and compliance in the dark.

With Konfer Confidence Cloud, you can set the visibility in place while the EU AI rules are being finalized.

How Konfer Enhances Visibility

Konfer provides a verification, visibility, and trust management framework that you can adapt to create the right guardrails once the regulations are formalized, backed by a complete and granular view of your organization’s AI ecosystem.

Konfer Confidence Cloud also increases AI transparency by automatically collecting the dynamic information generated as applications drive business results. It automatically discovers and aggregates AI assets and builds the relationships between them, giving all stakeholders a full picture of their AI ecosystem, including:

  • which AI solutions exist across the enterprise,
  • where each AI solution is located,
  • who uses them,
  • how users utilize them, and
  • which business metrics and KPIs they influence.

Konfer offers a rich set of capabilities, including KonferKnowledge. KonferKnowledge is the repository of all AI software assets and associated attributes — including AI/ML applications, models, feature stores, and data. The KonferKnowledge graph is the single source of truth about individual AI applications.
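Konfer does not publish its internal data model, but the kind of “single source of truth” a knowledge graph provides can be pictured as a small registry of assets plus the relationships between them. The Python sketch below is purely illustrative — `AIAsset`, `AssetGraph`, and all asset names are hypothetical assumptions, not Konfer’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One node in the illustrative asset graph."""
    name: str
    kind: str       # e.g. "application", "model", "feature_store", "dataset"
    location: str   # where the asset is deployed or stored
    owners: list = field(default_factory=list)
    kpis: list = field(default_factory=list)

class AssetGraph:
    """Minimal registry that records AI assets and how they relate."""
    def __init__(self):
        self.assets = {}   # name -> AIAsset
        self.edges = []    # (source_name, relation, target_name)

    def add(self, asset):
        self.assets[asset.name] = asset

    def relate(self, source, relation, target):
        self.edges.append((source, relation, target))

    def dependencies(self, name):
        """Names of every asset `name` depends on, directly or transitively."""
        found, stack = set(), [name]
        while stack:
            node = stack.pop()
            for source, _, target in self.edges:
                if source == node and target not in found:
                    found.add(target)
                    stack.append(target)
        return found

# Example: register a credit-scoring application and its lineage.
graph = AssetGraph()
graph.add(AIAsset("credit-app", "application", "eu-west production",
                  owners=["lending team"], kpis=["approval rate"]))
graph.add(AIAsset("score-model-v3", "model", "model registry"))
graph.add(AIAsset("applicant-features", "feature_store", "feature platform"))
graph.relate("credit-app", "uses", "score-model-v3")
graph.relate("score-model-v3", "trained_on", "applicant-features")

deps = graph.dependencies("credit-app")
# deps == {"score-model-v3", "applicant-features"}
```

Even a toy graph like this shows why such a registry matters for the AILD: when a regulator or claimant asks how a credit-scoring decision was produced, the graph can answer which model the application used and which data it was trained on.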

With Konfer, it will be easier for your organization to disclose, when needed, how it develops and trains its AI systems.

Using Konfer encourages customers to trust your AI products and be confident in using them, knowing that there is a systematic, standardized, and simple process to gather evidence should AI harm them.

Furthermore, AI compliance officers can also take advantage of Konfer’s trust management system to simplify verification, documentation, and other tasks they need to accomplish to ensure compliance.

Achieve AI Confidence with Konfer

Regardless of its outcomes, the AILD will have direct and indirect effects on your business and customers. But with Konfer Confidence Cloud, you can create a verification, visibility, and trust management framework and increase regulatory confidence, knowing that your AI is transparent and trustworthy.

Do you want to see how Konfer works? Request a demo today!

