Finance Embraces AI: Money Tight, Talent Scarcer

Why are financial institutions underinvesting in AI strategies?

Reporter Jiang Xin

Banks handle thousands of loan approval tasks daily. How can they conduct more efficient spot checks with limited head office staff? Artificial Intelligence (AI) large models are transforming banking workflows: by leveraging large models, banks can comprehensively review all loans, shifting from sampling to full coverage. This approach has also helped many banks identify numerous violations, reducing losses.

“Tomorrow I have a meeting with a business owner client. What products should I recommend? What tricky questions might the client ask?” At an insurance company, a newly hired agent repeatedly practices with an AI “coach” on their phone. Soon this agent, who initially struggled to close deals, began signing clients and became a top performer.

On March 17, 2026, during the launch of the report AI Boosts Innovation and Upgrades in Mainland China and Hong Kong Financial Services, PwC China management consulting partner Wang Jianping shared impressive AI application cases with the Economic Observer.

From October 2025 to January 2026, PwC surveyed 201 financial service professionals across mainland China and Hong Kong’s banking, insurance, and asset management sectors, conducting 20 in-depth interviews. The survey depicts the current state of AI adoption in finance: enthusiastic but with clear pain points.

Strategic Importance, Insufficient Investment

As a capital-rich industry, what is the current state of AI application in finance?

According to PwC’s survey, financial institutions’ AI applications fall into three levels. The first is internal use, such as internal knowledge base searches, which is relatively mature at leading banks but largely invisible to external clients. The second covers applications clients can perceive, such as intelligent customer service, investment advice, and post-trade services; here, anti-money laundering detection and internal audit are already delivering significant gains. The third involves transacting directly with clients or providing research and wealth management advice; because reasoning at this level must be transparent, fair, and traceable, large-scale implementation remains difficult in the short term.

Ni Qing, head of PwC China’s Asset and Wealth Management sector, notes that different industries focus on different AI deployment areas. Banking emphasizes risk control, anti-money laundering, and compliance; insurance focuses on agent capability enhancement, customer service, and claims; while asset and wealth management prioritize investment and portfolio management, data, and market analysis.

“Responding institutions have already seen initial returns of 10%–15% from AI investments, but they value the long-term benefits of AI in enhancing market position and strategic growth more,” Wang Jianping said. However, a clear gap exists between ideals and reality: the survey shows that 61% of financial institutions allocate less than 10% of their technology budgets to AI.

Why not invest more? Wang Jianping explains that raising technology spending is difficult in the current market environment. Institutions instead need to restructure: cut traditional technology spending, reduce the cost of developing and testing traditional business systems, and redirect those funds into AI. If the overall budget stays flat, significantly raising the share of technology and AI investment would squeeze traditional technology spending hard. Moreover, financial institutions prize prudent management, so they transform more slowly than other industries even as AI technology advances rapidly. The result is a paradox: AI is strategically important and investment ought to rise, yet short-term returns remain limited.

Talent and Data Dilemmas

“Internal training takes a long time, and external recruitment faces fierce competition for tech talent,” an HR professional at a state-owned bank told the Economic Observer, describing the difficulty of hiring AI-related talent.

The survey found that only 29% of financial institutions have successfully cultivated an “AI-first” culture. Talent shortages and rigid organizational structures are more serious obstacles than underinvestment or technical issues.

Li Weibin, partner at PwC China management consulting, states: “Respondents generally report that a major challenge is recruiting ‘hybrid’ talent who understand both business and algorithms. Training existing staff and creating incentives to encourage AI-driven transformation are crucial for establishing an AI-first culture. Some financial institutions are exploring special mechanisms to break traditional recruitment models. For example, some insurance companies have established AI research institutes with special compensation packages to attract senior talent; some banks promote internal talent transformation through training and assessments. Equally important, senior management must lead by example and actively advocate for AI adoption.”

Beyond talent and organizational culture, data remains a key constraint. The survey shows that the top three barriers to increased AI investment are data availability (30%), regulatory pressure (20%), and the need to prioritize maintaining existing core systems (14%). Data security and privacy concerns are the primary challenges in data management, leading 90% of financial institutions to rely on proprietary internal data to support AI applications.

Wang Jianping believes that the data needed for AI is shifting from structured to unstructured data. However, financial institutions’ unstructured data—such as bank credit policies, risk control manuals, audit knowledge, or insurance survey manuals—are often not integrated into data governance systems.

“The ‘hallucination’ problem inherent in large models makes it hard to meet the accuracy requirements of traditional business. In auto insurance claims, for example, when a customer reports an incident, backend systems may find relationships among the claim vehicle, the vehicle involved in the accident, the surveyor, and the dealership that could indicate fraud. If the model sees only the claim data without these relationships, it may fail to detect the fraud risk. Ontology modeling is therefore needed to identify relationships among entities and feed them into the large model to improve detection accuracy,” Wang Jianping explained.
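The relationship check Wang describes can be illustrated with a minimal sketch: count how often pairs of claim parties (vehicle, surveyor, dealership) have co-occurred on past claims, and flag a new claim whose parties keep reappearing together. The entity names, data shape, and threshold here are illustrative assumptions, not PwC’s or any insurer’s actual system.

```python
from collections import defaultdict

# Illustrative sketch (hypothetical data): flag a claim when its parties
# have appeared together on earlier claims at least `threshold` times,
# standing in for the ontology/relationship check described in the article.

def build_cooccurrence(history):
    """Count how often each pair of parties co-occurs across past claims."""
    pairs = defaultdict(int)
    for claim in history:
        parties = sorted(claim["parties"])
        for i in range(len(parties)):
            for j in range(i + 1, len(parties)):
                pairs[(parties[i], parties[j])] += 1
    return pairs

def flag_claim(claim, pairs, threshold=2):
    """Return party pairs on this claim seen together >= threshold times before."""
    parties = sorted(claim["parties"])
    hits = []
    for i in range(len(parties)):
        for j in range(i + 1, len(parties)):
            if pairs[(parties[i], parties[j])] >= threshold:
                hits.append((parties[i], parties[j]))
    return hits

# Hypothetical claim history: the same surveyor and dealership recur.
history = [
    {"parties": ["vehicle_A", "surveyor_1", "dealer_X"]},
    {"parties": ["vehicle_B", "surveyor_1", "dealer_X"]},
]
pairs = build_cooccurrence(history)
new_claim = {"parties": ["vehicle_C", "surveyor_1", "dealer_X"]}
print(flag_claim(new_claim, pairs))  # [('dealer_X', 'surveyor_1')]
```

A production system would replace the pair counts with a proper ontology or graph database, but the principle is the same: fraud signals live in the relationships between entities, not in any single claim record.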

As AI technology advances, financial institutions also face new challenges in risk control and security. The report finds that while basic protections are in place at input and output stages, there is a lack of effective automated monitoring tools to handle dynamic risks during model operation.
