
Artificial intelligence (“AI”) has become a transformative force in the employee benefit plan (“EBP”) community, offering significant advantages to recordkeepers and investment firms in transaction processing, data analysis, and investment decision-making.

While AI-powered solutions can bring advantages such as faster benefit determinations, improved investment strategies, and personalized recommendations, they also come with inherent risks that require careful scrutiny by fiduciary committees.

While AI offers exciting opportunities for optimizing benefit plan services, it also introduces inherent risks that cannot be ignored.

Potential pitfalls and challenges when vendors use AI 

When acquiring services from vendors that use artificial intelligence, it’s important to be mindful of potential pitfalls that can arise. Here are a few to consider:

  • Data Security and Privacy Concerns:  Data security and privacy are among the foremost concerns in the era of AI.  Pension, retirement, and health and welfare plans contain sensitive personal and financial information, making them prime cyberattack targets.  When AI-enabled vendors handle this data, the risk of breaches or unauthorized access increases.  Failing to secure these systems adequately can lead to severe financial and reputational damage for plan sponsors and participants.

  • Algorithmic Bias:  AI algorithms are only as good as their training data.  If that data is biased, the AI system can perpetuate and amplify those biases.  In the context of EBPs, such bias could result in unfair or unequal decisions about benefit eligibility and investment recommendations.  For example, an AI system might favor specific investment options or strategies that are not in the best interest of all participants, leading to potential legal and ethical challenges.

  • Lack of Human Oversight:  While AI can analyze vast amounts of data and make predictions, it lacks the nuanced understanding and ethical judgment of humans.  Overreliance on AI in EBP services may lead to a lack of human oversight, potentially resulting in decisions that do not align with the best interests of plan participants.  Human intervention and expertise are needed to ensure that AI-driven recommendations align with fiduciary responsibilities.

  • Regulatory Compliance:  The benefits industry is heavily regulated.  Implementing AI solutions in this space can pose significant challenges in terms of regulatory compliance.  Plan sponsors must ensure that their plans’ vendors adhere to policies and procedures that align with relevant standards and best practices.  Failure to validate these capabilities could lead to legal penalties and reputational damage for the employer and the fiduciaries who oversee the plans.

  • Cryptic and Murky Algorithms:  The deep learning models that AI algorithms use can be highly complex and difficult to understand, and their inherent lack of transparency raises concerns about how they reach decisions.  If plan participants cannot understand why certain benefits decisions were made, such as the denial of a withdrawal request, or how investment recommendations were generated, trust in the employer may be undermined.

  • Market Volatility and Uncertainty: Healthcare and investment markets are inherently volatile and unpredictable, and AI models can struggle to adapt to sudden and unexpected changes.  Using AI in servicing EBPs may result in overreliance on historical data and assumptions, which could lead to adverse results during participants’ health emergencies and investment market downturns.

Done right, vendor risk management reduces concerns about AI, improves benefit plan outcomes, and gives plan fiduciaries calm assurance.

Data security and privacy concerns, algorithmic bias, lack of human oversight, regulatory compliance issues, cryptic algorithms, and market volatility all pose potential threats to the successful implementation of AI as an EBP servicing approach.

To mitigate these risks, plan fiduciaries must protect their participants and enterprises by assessing how their vendors use AI, vendors’ competence with that technology, and the appropriateness of security policies and procedures.

Roland|Criss’ third-party risk assessments are a valuable solution for examining EBP service providers’ AI applications.
