Unlocking AI's potential in asset and wealth management: Three actions you can take now

  • Insight
  • 10 minute read
  • January 23, 2025

AI's rapid advances put asset and wealth management (AWM) firms at a crossroads, with firm size often dictating how ready they are to reinvent themselves. Larger firms are often roaring ahead: modernizing data, upskilling their teams, implementing AI-driven workflows, and acting to make AI intrinsic to almost everything they do. Research, investor relations, knowledge management and software development are just some areas where AI is delivering value to larger managers.

In contrast, many small- and medium-sized managers are adopting GenAI cautiously. Budget constraints, deferred data governance, outdated technology stacks, talent shortages and less formalized governance strategies can hinder their ability to innovate swiftly and safely.

Yet these firms can still capture value from AI-powered productivity tools and licensable point solutions, which are readily available in core platforms for portfolio and investment management and customer relationship management, as well as in cybersecurity and ERP systems. Despite their differences, both large and small managers typically need to tackle several challenges as they take concrete steps to advance with AI.

Key challenges facing asset and wealth managers

From our work helping implement AI in AWM firms, we have found some shared challenges and obstacles to adopting and scaling the technology. Overcoming them can open the door to rapid value creation.

Misaligned stakeholders

Managers often have conflicting internal expectations for AI. This misalignment, along with gaps in understanding AI's potential, can lead to fragmented and isolated solutions, typically increasing costs and reducing ROI. In smaller, partner-led firms, decision-making often reflects diverse stakeholder perspectives and direct financial trade-offs, which can make aligning on longer-term AI priorities particularly challenging.

Longstanding governance and risk vulnerabilities

AI introduces new operational, data, compliance and enterprise risks for both internally developed AI initiatives and those from third-party providers. Fiduciary obligations and data privacy requirements amplify these risks, particularly for smaller managers who often operate with limited risk management resources. Risk governance and oversight should be modernized in parallel with AI development programs. We have found that such modernization work often uncovers risks that managers were unaware of because they lacked a holistic inventory of their operational vulnerabilities, data quality issues and potential regulatory compliance gaps.

The skills gap

Your firm cannot be AI savvy if that knowledge is limited to a few programmers, data scientists and engineers. Everyone, from investor and client relations to research teams and portfolio managers, should learn how to use AI responsibly: understanding what AI can and cannot do, and their role in human-in-the-loop reviews of AI output. Non-technology personnel should also educate software developers about their professional needs and the constraints they must adhere to (including regulatory requirements). Such clear communication of needs and guardrails between developers and the business is essential in AWM, yet we often find the two sides are not in sync. For smaller managers, where in-house technology teams are often limited, building cross-functional literacy can be particularly challenging: operations and client advisors may lack exposure to technology, and technical knowledge is often concentrated in a few overstretched individuals.

Strict, fast-moving regulations

The new White House administration’s regulatory agenda for AI is still in the planning phase. But this should not slow the pace of your AI projects. Today, many technology-neutral regulations already apply to AI, including those governing fiduciary duties, data protection, anti-fraud, consumer protection and insider trading. There is also emerging, AI-specific legislation, marked by a surge in proposed, draft and passed bills at both the federal and state levels, as well as proposed AI-related rules from the Securities and Exchange Commission regarding potential conflicts of interest, outsourcing and more. AWM firms should maintain rigorous compliance while remaining adaptable. Identifying relevant regulatory requirements and continuously tracking and adapting to changes can be challenging for compliance functions that are already time- and resource-constrained, and global, federal and state obligations are rapidly increasing.

Three innovative moves for managers

AI is moving fast: creating new revenue streams and experiences, transforming workforces and more. This is not the future — it is right now. Whether your firm is large or small, you should act today: update your strategy, technology (including data), workforce and risk management. Here are three steps to help get ahead of the pack — and stay ahead.

1. Align your AI decisioning to your firm's strategy

Educate leadership and management on AI capabilities and align those capabilities with the firm's strategy, adopting a "problem-first" approach. By focusing on specific business problems, you can lay the foundation for Responsible AI: clarifying intended impacts, enabling early risk mitigation, and aligning AI with organizational values and regulatory standards. This approach not only helps meet regulatory requirements; it can also position AI as a strategic driver for sustainable growth and meaningful business value.

2. Define your risk appetite — then adopt Responsible AI practices

Assess your firm's tolerance for different AI-related risks. Then deploy a Responsible AI framework to help manage those risks appropriately. Effective Responsible AI practices can help unlock value by improving AI output quality and reducing the need for costly fixes. For managers, Responsible AI should include an inventory of AI assets, a risk taxonomy to assess the risk of each asset, and oversight and controls to help mitigate inherent risk in line with your chosen risk appetite. Advisers should be able to articulate to various stakeholders where AI-based tools are used. A scalable framework and nimble governance structure can help you keep pace with the swiftly moving regulatory landscape.
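
To make the inventory-and-taxonomy idea concrete, the sketch below shows one minimal way it might be represented in practice. It is an illustration only: the risk tiers, asset fields and risk-appetite threshold are hypothetical placeholders, not a prescribed framework, and a real implementation would follow your firm's own taxonomy, controls and governance requirements.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical risk tiers; a real taxonomy would mirror the firm's own framework.
class RiskTier(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIAsset:
    name: str                      # e.g. "client-reporting summarizer"
    owner: str                     # accountable business owner
    use_case: str                  # the business problem it addresses
    data_sensitivity: RiskTier     # sensitivity of the data the asset touches
    regulatory_exposure: RiskTier  # e.g. fiduciary, privacy or marketing rules
    human_in_the_loop: bool        # is output reviewed before it reaches clients?

    def inherent_risk(self) -> RiskTier:
        """Illustrative scoring: take the higher of the two risk drivers."""
        return max(self.data_sensitivity, self.regulatory_exposure)

# A firm-wide inventory is, at minimum, a register of all AI assets in use.
inventory = [
    AIAsset("client-reporting summarizer", "Client Relations", "draft quarterly letters",
            RiskTier.MEDIUM, RiskTier.HIGH, human_in_the_loop=True),
    AIAsset("research note tagger", "Research", "classify internal notes",
            RiskTier.LOW, RiskTier.LOW, human_in_the_loop=False),
]

# Flag assets whose inherent risk exceeds the firm's stated appetite so that
# oversight and additional controls can be applied.
RISK_APPETITE = RiskTier.MEDIUM
for asset in inventory:
    if asset.inherent_risk() > RISK_APPETITE:
        print(f"Escalate for additional controls: {asset.name} "
              f"(inherent risk: {asset.inherent_risk().name})")
```

Even a simple register like this gives compliance and oversight functions a single view of where AI is used, who owns it and which assets warrant closer review, which is also what advisers need in order to explain their AI use to stakeholders.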

3. Continue to invest in human knowledge

Talent has been central to the asset and wealth management business for decades, and people remain decisive in the age of AI. Specialized talent with domain experience should supervise both AI development and governance to help align your firm's AI with employee and client expectations for responsible use and high-quality outputs. Equip your teams with a clear understanding of your firm's AI risk appetite, promote awareness of acceptable AI use, and provide the tools, culture and skills that can drive innovation.
