In its new brief, the global Technology Policy Council of the Association for Computing Machinery (ACM) emphasizes the need to reduce the risks associated with algorithmic systems. The council warns that these risks remain largely unaddressed despite the widespread use of such systems.
What practical steps can organizations take to reduce risks associated with algorithmic systems?
The council provides practical steps organizations can take to reduce the risks and make algorithmic systems safer.
- Research. The brief notes that algorithmic systems carry inherent risks and that more attention and research are needed to address them. It recommends studying human-centered software development practices, testing, audit trails, monitoring, training, and governance to identify ways to reduce risks.
- Shift focus from AI ethics to safer algorithmic systems. Ben Shneiderman, the brief’s lead author, suggests shifting the focus from AI ethics to safer algorithmic systems. He emphasizes the need to change what we concentrate on and how we make these ideas practical.
- Create a human-centered safety culture. The ACM brief suggests that organizations must establish a safety-focused culture incorporating “human factors engineering” and integrate it into algorithmic system design so that safety is prioritized.
- Implement safeguards and review. Shneiderman emphasizes that governments and organizations should treat the implementation of safeguards with the same rigor they apply to reviewing new products or pharmaceuticals before releasing them to the public.
- Adopt safety-related practices. The brief suggests ways to enhance the safety of algorithmic systems, such as using “red team” tests and “bug bounties.” These methods have worked well to improve cybersecurity and can help reduce the risks of algorithmic systems.
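To make the audit-trail and monitoring practices above concrete, here is a minimal, hypothetical Python sketch of an audited prediction wrapper. The model, field names, and log format are illustrative assumptions, not part of the ACM brief:

```python
import json
import time

class AuditedModel:
    """Wraps a prediction function and records every call for later review.

    Illustrative sketch only: a production audit trail would write to
    durable, tamper-evident storage rather than an in-memory list.
    """

    def __init__(self, predict_fn, model_name):
        self.predict_fn = predict_fn
        self.model_name = model_name
        self.audit_log = []  # assumption: in-memory store for demonstration

    def predict(self, inputs):
        output = self.predict_fn(inputs)
        # Record enough context to reconstruct and review the decision later.
        self.audit_log.append({
            "timestamp": time.time(),
            "model": self.model_name,
            "inputs": inputs,
            "output": output,
        })
        return output

    def export_log(self):
        """Serialize the audit trail, e.g. for an external reviewer."""
        return json.dumps(self.audit_log)

# Usage: wrap a toy scoring rule (hypothetical) and audit its decisions.
model = AuditedModel(lambda x: "approve" if x["score"] > 0.5 else "deny",
                     "loan-screener-v1")
decision = model.predict({"score": 0.7})
```

Even this small amount of structure gives reviewers and red teams something concrete to inspect: which model produced which decision, from which inputs, and when.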
Shneiderman says that these efforts can give businesses a competitive edge: investing in safety and a culture of safety is not only a moral obligation but also a wise business decision.
Why embrace a holistic approach to risk management?
Embracing a holistic approach is the wiser business decision. It helps organizations achieve greater resilience and survive in a highly volatile market. This approach, as McKinsey suggests, shifts the focus from a narrow view of risk and controls to a broader, strategic view of the entire environment.
According to McKinsey, resilient organizations embrace this holistic view rather than merely hunting for blind spots in risk coverage within today’s business model. The focus extends beyond ethics, risk, and safety to resilience itself.
Like cybersecurity, AI is constantly evolving. Businesses need to do more than just “react to safety risks”; they need an encompassing approach that delivers technological, operational, financial, organizational, reputational, and business-model resilience.
The crux of the matter: AI transparency and trust
Shneiderman compares the creation of safer algorithmic systems to the development of civil aviation. Civil aviation is considered safe despite its inherent risks.
People consider something safe if they trust it, and they will only trust something that is transparent. They need evidence to be confident that it is indeed safe.
Unlike civil aviation, however, AI is a black box. We cannot see its “wiring” with the naked eye. We cannot simply open the hood, identify what is wrong, and fix it.
With a solution that operationalizes AI Trust, companies can achieve confidence in AI. This is where Konfer comes in.
Konfer’s holistic solution operationalizes AI Trust
Konfer, the end-to-end AI Trust Management System leader, enables the resilience that stems from trust and confidence. Konfer empowers organizations to build a culture of robustness, transparency, and security, taking a holistic approach to operationalizing trust in AI systems throughout their lifecycle.
Konfer provides a mechanism that empowers all stakeholders to map their AI systems automatically and turn what is “under the hood” into a crystal-clear map of their AI ecosystem. Stakeholders can use this map to pinpoint potential issues and proactively manage their business metrics to achieve their desired results.
Konfer goes beyond monitoring and measuring AI risks. It provides an all-encompassing strategy by giving companies powerful capabilities, including KonferKnowledge, KonferConfidence, and KonferTrust.
Conclusion
It is mission-critical to reduce the risks associated with algorithmic systems. But, as McKinsey suggests, the key to becoming more resilient is to take a holistic approach.
Konfer offers a holistic solution that operationalizes AI Trust. It provides a mechanism for stakeholders to understand the inner workings of AI systems, with advanced capabilities to map those systems, identify potential dangers and problems, and address challenges beyond conventional solutions.
If you want to learn more about Konfer’s holistic solution, please reach out to us at konfer.ai.