The rapid pace of innovation in Artificial Intelligence (AI) has opened up unprecedented opportunities and transformative changes across industries. From revolutionizing healthcare to optimizing logistical networks, AI's potential appears boundless. However, that growth has brought rising unease about AI's capabilities, and about who actually monitors and assumes responsibility for its development and deployment.
In the field of employee benefits, organizations are utilizing large quantities of personal information and AI functions to create innovative, relevant solutions. The rapid proliferation of generative and agentic AI systems introduces new layers of complexity and risk, and amplifies the urgency for robust frameworks that adhere to ethical, regulatory, industry, and societal requirements – necessitating new rules, policies, and practices.
Employee benefits brokers and consultants are on the front lines, continually hearing from employers considering an investment in benefits for their people: "How can my employees trust technology, especially AI, as it continues to advance?" KPMG research shows that 61% of people are wary about trusting AI systems overall, with AI use in Human Resources cited as the least trusted and accepted application.
But not all AI is created equal, and not all applications of AI are equal. PwC research found that companies investing in robust Responsible AI (RAI) programs earn trust levels up to 7% higher among the public and employees, and can even see revenues up to 3.5% higher than companies that invest only in compliance. Furthermore, RAI leaders significantly reduce the risk and severity of negative AI incidents.
For benefits and HR technology companies, RAI practices are critical for managing privacy, safety, fairness, accountability, and transparency, while reducing risks such as misuse of, or discrimination based on, the vast amounts of protected health information and personally identifiable information these systems handle.
When RAI is implemented, employers actually place high trust in AI-driven recommendations: A recent study by The Hartford's Future of Benefits found that 71% of employers trust AI to make benefits recommendations for their employees. This high level of confidence makes it critical that the underlying AI is trustworthy, transparent, and ethical.
To move past skepticism, employers and employees need solutions from companies that prioritize ethical development and transparency. In other words, RAI is non-negotiable, and the unlock is AI governance.
Governance means measuring and addressing factors like model and algorithmic bias — decisions that could negatively impact the user — as well as enabling model explainability and transparency around the systems and human factors that influence AI model development. Good governance also builds quality and performance checks into the model development lifecycle, contributing to positive outcomes for all stakeholders.
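As a concrete (and deliberately simplified) illustration of the bias-measurement step described above, the sketch below computes a demographic parity gap — the largest difference in positive-recommendation rates between groups. The function name, sample data, and group labels are hypothetical illustrations, not part of any specific governance framework or vendor's implementation.

```python
# Minimal sketch of one common fairness check: demographic parity.
# Assumes binary model outputs (1 = benefit recommended) and a group
# label per prediction; real governance programs use many such metrics.

def demographic_parity_gap(predictions, groups):
    """Largest gap in positive-recommendation rate between any two groups."""
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Illustrative data: group A is recommended benefits at 3/4 = 0.75,
# group B at 1/4 = 0.25, so the gap is 0.50 — a flag for review.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
# prints "demographic parity gap: 0.50"
```

In a governance program, a metric like this would be tracked across the model lifecycle and paired with documented thresholds and human review, rather than used as a one-off check.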
By implementing a consistent and coherent approach to governance, guided by technical and industry standards like the NIST AI Risk Management Framework (RMF), AI providers can confidently bring the best products and outcomes to market, knowing that the algorithms in their modeling systems are fair, transparent, and unbiased. The impact of following responsible AI practices through strong governance also extends across internal teams, improving process efficiency and reliability.
Jon Douglas, Head Actuary at Nayya, describes the intentionality of their governance efforts: “As pioneers in implementing effective governance in our systems, we have prioritized the implementation of Responsible AI practices in all our modeling systems. We are continuously investing in ways to monitor and enhance safety, resilience, and reliability. We are well-equipped to offer the most cutting-edge advancements with a guarantee of trust so that we have mutual confidence with every single stakeholder, customer, and user of our solutions.”
AI providers should be able to confidently bring innovative solutions to the market, knowing that there are multiple layers of visibility and accountability to maintain quality and mitigate risks. Plan sponsors, in turn, can then trust that they are using solutions that have been thoroughly vetted, and can effectively communicate the critical steps taken into consideration for their workforces – bringing the trust and confidence full-circle when it comes to AI.
Therefore, governance is not merely an optional add-on; it is an essential component for realizing the full potential of AI while mitigating its inherent risks. By proactively establishing comprehensive ethical guidelines, robust regulatory frameworks, and fostering open dialogue, we can navigate the complexities of the AI revolution and ensure a future where we maximize human thriving through better information delivery and use.
According to Gartner, by 2027, 75% of AI platforms will integrate AI governance as a key competitive advantage. That means finding solutions that go beyond mere compliance and pay off in measurable business value. This makes it critical to choose AI partners with ongoing commitments to Responsible AI practices, with whom you can collaborate and foster shared learning and common guidelines.
Nayya’s strategic decision to partner with Monitaur to embed superior AI governance and external accountability throughout the employee benefits experience reinforces its commitment to rigorously assessing and mitigating risk exposures, and to ensuring transparency and trust in every solution it delivers.