Corporate Report Highlights Security Gaps in Enterprise Generative AI Adoption

OpenText Corporation has released a comprehensive global report, produced in collaboration with the Ponemon Institute, that examines the security and governance challenges confronting enterprises as they adopt generative artificial intelligence (AI). Drawing on data from a worldwide survey of information technology and security professionals, the study finds that while most organizations are actively deploying AI tools, the foundational security and governance controls required for responsible AI use remain largely absent.

Key Findings

  • Limited AI Maturity in Cybersecurity – Only a minority of respondents reported that their AI systems were fully mature in the context of cybersecurity. Current deployments are frequently disconnected from risk assessment frameworks and compliance controls, leaving critical gaps in protection.
  • Model Bias, Privacy, and Output Integrity – A significant proportion of organizations struggle to manage model bias, safeguard data privacy, and prevent misleading outputs from AI systems. These weaknesses directly impede the effectiveness of AI-driven threat and anomaly detection, where human oversight remains indispensable.
  • Risk of Trust and Compliance Erosion – As AI systems become increasingly autonomous, the gap between the pace of adoption and the maturity of security practices risks undermining stakeholder trust and regulatory compliance.

Executive Perspective

OpenText executives emphasized the necessity of embedding transparency, policy‑based controls, and continuous monitoring into AI architectures from the outset. They urged organizations to align AI initiatives with secure information‑management and governance frameworks to unlock business value responsibly. The company reiterated its leadership role in secure information management and highlighted its suite of solutions designed to help enterprises govern, protect, and activate data for AI applications.

Implications for the Corporate Landscape

The report underscores a broader trend in the technology sector: the need for cross‑functional collaboration between data science, cybersecurity, and compliance teams. Enterprises that establish mature AI governance models are better positioned to:

  1. Reduce Operational Risk – By integrating AI with established risk management protocols, firms can reduce the likelihood of misclassification errors and data breaches.
  2. Enhance Regulatory Compliance – Structured AI governance supports adherence to evolving data‑protection regulations such as GDPR, CCPA, and emerging AI‑specific legislative frameworks.
  3. Strengthen Competitive Positioning – Organizations that demonstrate robust AI security can differentiate themselves in markets where trust and data integrity are critical to customer acquisition and retention.

Broader Economic Context

The findings also resonate with macro‑economic factors shaping corporate strategy. In an era of digital transformation accelerated by remote work, supply‑chain disruptions, and heightened cyber‑threats, the ability to deploy AI securely is becoming a key determinant of operational resilience. Investors are increasingly scrutinizing AI governance as part of enterprise risk assessments, while regulatory bodies are tightening oversight around AI-driven decision-making.

By providing actionable insights, the OpenText–Ponemon Institute report offers a roadmap for companies seeking to bridge the gap between AI innovation and security rigor. The expectation is that broader adoption of mature governance frameworks will lead to safer, more reliable AI deployments across critical operations, ultimately fostering trust and sustaining competitive advantage in a rapidly evolving digital economy.