Industry and Academia Discuss AI Explainability and Its Immediate Impact in Expert Panel

Last week, leading experts from academia, industry, and regulatory backgrounds gathered to discuss the legal and commercial implications of AI explainability. The panel discussion, hosted by Professor Shlomit Yaniski Ravid of Yale Law and Fordham Law, brought together thought leaders to address the growing need for transparency in AI-driven decision-making.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20250304657219/en/

Industry and Academia Discuss AI Explainability and Its Immediate Impact in Expert Panel (Graphic: Business Wire)

AI is becoming an essential tool in fields such as healthcare, finance, and law enforcement, yet its widespread use raises critical concerns about accountability. The discussion explored how explainability can bridge the gap between AI innovation, legal frameworks, and public trust.

Dr. Hanan Mandel, an internationally recognized expert in AI, opened the discussion by emphasizing the importance of ensuring AI operates within ethical and legal parameters. He stressed the need to "open the black box" of AI decision-making, making its processes understandable to regulators, businesses, and the public alike. "Explainability is not just about compliance—it’s about trust. Without clear reasoning behind AI decisions, we risk undermining both regulatory confidence and public acceptance," he said.

Regulatory Challenges and the New AI Standard ISO 42001

Tony Porter, former Surveillance Camera Commissioner for the UK Home Office and Chief Privacy Officer at Corsight AI, provided insights into the regulatory challenges surrounding AI transparency. He explained the significance of ISO 42001, the international standard for AI management systems, and how it provides a framework for responsible AI governance. "Regulations are evolving rapidly, but standards like ISO 42001 offer organizations a structured approach to balancing innovation with accountability," Porter noted.

The discussion highlighted key differences in AI regulation between the US and Europe. While European regulators place significant emphasis on privacy and ethical considerations, the US regulatory landscape remains more fragmented, influenced by both commercial interests and state-level policies.

Industry Perspectives on AI Explainability

The panel featured representatives from leading AI companies who shared how their organizations implement transparency in their AI systems.

Daphne Tapia from ImiSight highlighted the importance of explainability in AI-powered image intelligence. ImiSight specializes in multi-sensor integration and analysis, using AI/ML algorithms to detect changes, anomalies, and objects across sectors including land encroachment, environmental monitoring, border security, and infrastructure maintenance. Tapia emphasized the challenge of AI interpretability in high-stakes applications such as border security and environmental monitoring, where false positives and undetected anomalies can have significant consequences. "AI explainability means understanding why a specific object or change was detected. We prioritize traceability and transparency to ensure users can trust our system's outputs," she explained. ImiSight keeps its AI robust and reliable by continuously refining its models on real-world data and user feedback, and it collaborates with regulatory agencies to ensure its systems meet international compliance standards.
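
To make the idea of traceability concrete, here is a minimal sketch of a detection record that carries its evidence with it, so every alert can answer the question "why was this flagged?". The class, its fields, and the 0.80 review threshold are illustrative assumptions for this article, not ImiSight's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """A single detected change or object, with the evidence behind it."""
    label: str                 # what was detected, e.g. "new structure"
    confidence: float          # model confidence in [0, 1]
    sensor: str                # which sensor produced the input frame
    evidence: dict = field(default_factory=dict)  # scores that drove the call

    def explain(self) -> str:
        """Render a human-readable rationale for the detection."""
        reasons = ", ".join(f"{k}={v:.2f}" for k, v in self.evidence.items())
        return (f"{self.label} (confidence {self.confidence:.0%}) "
                f"from {self.sensor}; contributing signals: {reasons}")

REVIEW_THRESHOLD = 0.80  # assumed cutoff; a real system would tune this per use case

def triage(detections: list[Detection]) -> list[Detection]:
    """Route low-confidence detections to a human analyst instead of auto-alerting."""
    return [d for d in detections if d.confidence < REVIEW_THRESHOLD]

# Example: a border-security change detection with traceable evidence.
d = Detection("vehicle near fence", 0.72, "EO-camera-12",
              {"change_score": 0.81, "shape_match": 0.64})
print(d.explain())
print("needs human review:", d in triage([d]))
```

Anything below the review threshold is routed to a human analyst rather than triggering an automatic alert, which is one simple way to limit the cost of false positives in high-stakes settings.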

Pini Usha from Buffers.ai shared insights on AI-driven inventory optimization. Buffers.ai helps businesses manage their supply chains by predicting demand, reducing waste, and optimizing logistics. Usha noted that AI models must not only predict supply chain trends but also provide clear, understandable insights for non-technical users, so businesses can act confidently on AI-driven recommendations. "Transparency is key. If businesses cannot understand how AI predicts demand fluctuations or supply chain risks, they will be hesitant to rely on it," he said. Buffers.ai integrates explainability tools that let clients visualize and adjust AI-driven forecasts, ensuring they align with real-time business operations and market trends, and its solutions help businesses improve efficiency while adhering to emerging data governance standards.
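
In the spirit of what Usha describes, one common way to make a forecast legible to non-technical users is to report it as a sum of named components rather than a single opaque number. The sketch below uses made-up figures and a hypothetical ForecastBreakdown class; it is a generic illustration of the pattern, not Buffers.ai's model:

```python
from dataclasses import dataclass

@dataclass
class ForecastBreakdown:
    """A demand forecast split into parts a planner can inspect and adjust."""
    baseline: float      # long-run average weekly demand
    trend: float         # growth or decline since last quarter
    seasonality: float   # e.g. holiday uplift expected for this week
    promotion: float     # expected effect of a planned promotion

    @property
    def total(self) -> float:
        return self.baseline + self.trend + self.seasonality + self.promotion

    def explain(self) -> str:
        return (f"forecast {self.total:.0f} units = "
                f"{self.baseline:.0f} baseline "
                f"{self.trend:+.0f} trend "
                f"{self.seasonality:+.0f} seasonality "
                f"{self.promotion:+.0f} promotion")

# Example: week-42 forecast for one SKU.
week_42 = ForecastBreakdown(baseline=1200, trend=60, seasonality=180, promotion=150)
print(week_42.explain())
# forecast 1590 units = 1200 baseline +60 trend +180 seasonality +150 promotion
```

Because each component is visible, a planner who distrusts one input, say the promotion uplift, can adjust it and immediately see how the total responds.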

Matan Noga from Corsight AI discussed the role of explainability in facial recognition technology. Corsight AI specializes in real-world facial recognition, providing solutions to law enforcement, airports, malls, and retailers. Its technology is used for watchlist alerting, locating missing persons, and forensic investigations. Corsight AI differentiates itself by focusing on high-speed, real-time recognition while ensuring compliance with evolving privacy laws and ethical AI guidelines, and it works closely with government and commercial clients to promote responsible AI adoption.

Ensuring AI Transparency and Human Oversight in Legal Decision-Making

Alex Zilberman from Chamelio, a legal intelligence platform, addressed the role of AI in corporate legal teams. He pointed out that in legal decision-making, even the best AI systems must allow for human oversight, ensuring that legal professionals retain control over critical judgments rather than relying entirely on automated outputs. Chamelio transforms how in-house legal teams operate by using AI to extract critical obligations, streamline contract reviews, monitor compliance, and deliver actionable insights from vast legal document repositories. "Trust in AI-powered legal tools comes from transparency. Our system ensures legal professionals understand the source of every recommendation, avoiding the pitfalls of black-box AI," he said. Chamelio's AI is designed to adapt to an organization's specific legal frameworks, allowing seamless integration with existing workflows while maintaining compliance with regulatory standards. The platform prioritizes user control, letting legal teams verify and adjust AI-generated insights.

"How do you handle AI hallucinations?" Zilberman: "One example is when the algorithm identifies a need to modify an agreement based on logic derived from past agreements. It analyzes patterns, recognizes common clauses, and suggests adjustments accordingly. However, when the algorithm encounters a dilemma it has never handled before—such as a clause with no precedent or conflicting legal terms—it does not generate assumptions or fabricate responses. Instead, we instruct the model to flag the uncertainty and ask for human input, ensuring that legal professionals remain in control of critical decisions."

Balancing Commercial Demands and Regulation

One of the key themes of the discussion was how AI companies navigate the tension between commercial objectives and regulatory requirements. Panelists agreed that public-private partnerships are crucial in shaping policies that align with both business interests and ethical standards. Prof. Yaniski Ravid posed a key question to the panel: "Do you see different approaches between commercial and government AI applications? And what are the differences between the US and Europe in terms of AI regulation?"

Matan Noga responded: "The difference lies in regulatory compliance. In highly regulated industries like banks and casinos, every system must adhere to strict privacy, data protection, and many other standards. In less regulated sectors such as retail, the focus is ROI: when a retailer sees that live facial recognition significantly reduces shoplifting and employee fraud, it's a no-brainer. From our experience, police and intelligence agencies are less constrained by regulations than commercial entities."

On geographical differences, Noga added: "In Europe and the UK, both government and commercial clients tend to prioritize privacy and fair use. In other parts of the world, these concerns are almost non-existent. In Latin America, for example, where violent crime is widespread, public opinion tends to prioritize personal safety over privacy." The broader debate echoed this point: in Europe and the UK, privacy laws and ethical AI frameworks take precedence, whereas in regions with high crime rates, security considerations often outweigh privacy concerns.

The Path Forward

The panel discussion underscored the growing urgency of AI explainability as regulators worldwide seek to impose stricter oversight on AI-driven systems. Industry leaders, legal experts, and policymakers must work together to develop practical standards that ensure AI remains both effective and accountable. As AI continues to transform industries, transparency will be essential in fostering public trust and enabling responsible innovation. The discussion highlighted that while challenges remain, collaborative efforts between academia, regulators, and industry can drive meaningful progress in AI governance.

Prof. Yaniski Ravid concluded the discussion by emphasizing the importance of ongoing dialogue between stakeholders: "AI explainability is not just a technical issue—it is a societal challenge that requires continuous engagement between regulators, industry leaders, and academia to ensure AI remains fair, accountable, and transparent."

Watch the full session here
