
As the hype surrounding artificial intelligence (“AI”) intensifies, trustee boards and employee benefit plan committees must distinguish genuine AI-enabled services from those that merely claim the label. Increasingly, vendors of critical infrastructure components such as third-party administration, recordkeeping, health plans, investment management, and payroll tout the AI functionality of their offerings.

Continued monitoring of AI-enabled services begins with knowing where that technology exists in vendors’ offerings and whether it’s accommodated in their data security policies and practices.

Artificial Intelligence Enters the Mainstream

AI is nothing new. It arrived as a revolutionary technology with significant consequences for the employee benefit plan (“EBP”) community years ago. AI-powered tools and software have already changed how administration and payroll firms operate and build their offerings, largely by automating repetitive processes.

This time, however, the significance of the latest advances in AI-powered services goes beyond simple automation. Used responsibly, these tools provide expanded capabilities that can improve the work of EBP operations executives and oversight committee members.

Transformative Capabilities

The expanded capabilities inherent in AI technologies are now more accessible than ever and present considerable opportunities for greater efficiency in EBP-related operations.

Investment Decision-Making: Natural language processing and natural language generation (NLP/G) can summarize and standardize structured and unstructured data from various sources, helping EBP fiduciaries assess investment ideas as they monitor their plans’ portfolios. For example, AI evaluation tools that use NLP/G can test whether portfolio managers are complying with their investment mandates.

EBP Administration: The most sophisticated applications of AI would draw on personal information about each plan participant, such as investments, age, savings rate, and retirement asset goals, to drive highly personalized interactions and decisions, like recommending a withdrawal rate suited to the participant’s particular circumstances.
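
As a purely illustrative sketch of that idea, the example below assumes a handful of hypothetical participant fields and applies a simplified rule-of-thumb adjustment to a 4% baseline; an actual AI-driven recommendation engine would draw on far richer data and models.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    age: int                # current age
    balance: float          # current retirement account balance
    savings_rate: float     # share of pay deferred each year, e.g. 0.06
    target_assets: float    # retirement asset goal

def suggested_withdrawal_rate(p: Participant) -> float:
    """Start from a 4% baseline and nudge the rate up or down
    based on how the participant's balance compares to their goal."""
    base = 0.04
    funded_ratio = p.balance / p.target_assets if p.target_assets else 0.0
    adjustment = 0.005 * (funded_ratio - 1.0)   # ahead of goal -> slightly higher
    return max(0.025, min(0.06, base + adjustment))

# A participant slightly ahead of their goal gets a rate just above 4%.
print(f"{suggested_withdrawal_rate(Participant(67, 850_000, 0.08, 800_000)):.2%}")
```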

Payroll Operations: Effective payroll management requires routinely handling large volumes of data and close coordination with other facets of human resource management, such as a 401(k) plan. AI can improve the payroll function in several ways:

  • quicker and more precise employee classification for payroll calculation and tax band assignment (see the sketch after this list);
  • improved communication, interaction, and query resolution using conversational AI chatbots; and
  • stronger adherence to a changing regulatory environment.
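
As a rough illustration of the first item above, the sketch below assigns employees to entirely hypothetical tax bands using made-up thresholds and a simple hours test; a production system would apply actual jurisdiction-specific rules, and an AI-assisted one might learn classifications from historical payroll data.

```python
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    hours_per_week: float
    annual_pay: float

# Illustrative thresholds only; real tax bands come from the applicable tax tables.
TAX_BANDS = [(10_000, "band A"), (40_000, "band B"),
             (85_000, "band C"), (float("inf"), "band D")]

def classify(e: Employee) -> dict:
    """Assign a work status and a (hypothetical) tax band for payroll calculation."""
    status = "full-time" if e.hours_per_week >= 30 else "part-time"
    band = next(label for limit, label in TAX_BANDS if e.annual_pay <= limit)
    return {"name": e.name, "status": status, "tax_band": band}

print(classify(Employee("A. Sample", 40, 62_000)))
```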

Cybersecurity Is an Unrelenting Challenge for AI Users

AI will affect employers and fiduciary committees in a variety of ways, bringing both potential benefits and adverse effects. AI can transform EBP procedures in ways that improve participant outcomes, raise efficiency, and strengthen data reconciliation. However, concerns about data privacy and security risks underscore the substantial obstacles to realizing those advantages.

Most EBP sponsors rely extensively on third parties for investment products, administration services, health and welfare plans, and a host of other functions that affect operations, including payroll, asset custody, and actuarial services. Each vendor accesses and retains personally identifiable information (“PII”) or protected health information (“PHI”) to one degree or another, exposing plan information and data to cyberattacks that occur outside the sponsor’s environment and beyond its ability to protect them directly.

After the Employee Benefits Security Administration (“EBSA”) introduced its cybersecurity best practices, many employers adopted internal control frameworks that focus heavily on the policies and procedures of the EBPs’ vendors. While the EBSA’s guidelines are valuable, they lack specificity about AI technologies.

Include AI in Your Cybersecurity Assessments

The assessment methodology Roland|Criss uses to test the authenticity of vendors’ AI claims starts with asking them to describe the analytical model employed in their AI solutions. We evaluate their responses against several criteria to determine how their AI-enabled components affect the confidentiality and privacy of the plan’s PII and PHI.

We also require that any service provider claiming its offering includes artificial intelligence demonstrate how it benefits the plan and its participants more than versions without AI.

Unlike other technology products, AI systems are dynamic and require providers’ ongoing commitment to improving their adaptability and robustness. Continued monitoring of AI-enabled services therefore begins with knowing where the technology resides in a vendor’s offering and whether the vendor’s data security policies and practices account for it.

Roland|Criss offers cybersecurity assessments with advanced AI criteria for fiduciary committees and trustee boards.
