As artificial intelligence (AI) adoption surges, along with headlines about new AI developments, corporate directors are seeking detailed answers to deep questions about AI governance and risk management practices.
“Twelve months ago, most boards were just beginning to have conversations among themselves and with management about AI governance,” according to a recent Grant Thornton article. “After the rapid AI advancements in 2024, there is now a tangible urgency to the questions boards are asking about what’s next in AI governance.”
Tax leaders whose teams are using or planning to use AI tools (e.g., easy-to-use conversational interfaces that deliver rapid responses to tax automation questions) should be prepared to respond to these queries.
CEOs have their own questions about AI governance. Approximately a third of CEOs report low trust in the deployment of AI solutions and another third report only moderate trust, according to PwC’s 28th Annual Global CEO Survey. This ‘trust gap’ requires attention given the potential value that AI solutions can help organisations generate.
“In a future where productivity gains will be mere table stakes in the race to leverage AI, trust in the tech will be essential to maximising its potential – and avoiding obsolescence,” according to PwC. The Global CEO Survey findings support this point: chief executives who report having a high level of trust in their AI solutions also report higher gains in efficiency, revenue and profitability. Their companies also tend to notch more progress integrating GenAI tools into tech platforms, business processes and workflows.
The Grant Thornton report on AI governance discusses guardrails that should be in place when these tools are implemented. These mechanisms include a clearly articulated AI strategy, a structured governance framework, stringent security protocols, policies that define acceptable use, rigorous quality management processes and comprehensive training programmes. “Maintaining data quality, privacy and security is one of the biggest challenges leaders face related to the technology,” according to Grant Thornton.
I strongly agree with that assessment. The prioritisation of data security and privacy is one of the guiding principles Vertex adheres to as we develop and refine AI functionalities and solutions, like Vertex Copilot. As my colleague, Vertex Vice President of Technology Strategy Chris Zangrilli, wrote back in 2023, “AI relies on data, and we already have high standards in place to make sure our customers’ information remains protected and private. Since we have these existing control frameworks in place, we are taking those high standards forward and responsibly utilising them throughout the progression of AI’s journey in tax software.”
Vertex’s AI journey continues to make notable progress (you’ll hear more about our evolving AI capabilities soon), as does our commitment to empowering our customers to benefit from AI’s game-changing potential while maintaining control and data security.