Goldman Sachs has blocked Anthropic's Claude access for its Hong Kong employees. (Photo: Reuters)

Goldman Sachs blocks Anthropic's Claude in Hong Kong as AI tensions between US and China rise

Goldman Sachs has restricted the use of Anthropic's Claude in Hong Kong. This comes at a time when US AI companies are increasingly accusing Chinese firms of using their models to train competing systems at a fraction of the cost.

By India Today

In Short

  • Goldman Sachs recently blocked Anthropic's Claude access for Hong Kong employees
  • The move stems from the bank's strict interpretation of its contract with Anthropic
  • US firms accuse Chinese companies of misusing their AI models' data

The AI race between China and the United States is intensifying, with the governments of both countries retaliating against each other in various ways. A recent report suggests that US investment bank Goldman Sachs has stopped its bankers in Hong Kong from using Anthropic’s AI models. US AI services such as ChatGPT and Claude are banned in mainland China as part of the so-called Great Firewall. However, Hong Kong, a former British territory, has long operated largely outside Chinese censorship and restrictions.

A report by the Financial Times, citing sources familiar with the matter, said that employees of the Wall Street bank in the Chinese territory of Hong Kong lost access to Claude, either directly or through internal AI tools, a few weeks ago.

The move by Goldman Sachs was not prompted by any action or coercion from the Chinese government. A person familiar with the decision told the FT that the bank took a strict view of its contract with Anthropic after consulting the company. As a result, Goldman blocked its Hong Kong employees from using Anthropic’s products, though the restriction does not apply to other AI providers such as OpenAI.

Meanwhile, an Anthropic spokesperson said its products were never supported in Hong Kong.

Hong Kong is the main hub for investment banking and finance in Greater China, where global banks handle cross-border business such as trading, mergers, and share sales.

Concerns over AI data and competition

American AI companies have accused Chinese firms of using their AI models to train local systems at a fraction of the cost, a process known as AI distillation. Anthropic recently alleged that DeepSeek, Moonshot AI, and MiniMax secretly generated over 16 million conversations with its AI chatbot Claude, using more than 24,000 fake accounts, to harvest its intelligence and train competing models. OpenAI and Google have raised similar concerns about Chinese firms, warning that such practices could bypass years of costly AI research.

OpenAI told US lawmakers in February it had caught DeepSeek trying to secretly copy its most powerful AI models and warned that the company was developing new methods to disguise its actions.

China tightens control over AI firms

Meanwhile, China is increasingly protective of its AI companies. The country recently barred Manus AI, a startup that generated significant buzz last year with its AI agent technology, from selling itself to Meta, Mark Zuckerberg's company.

China does not want a high-performing AI system, built by its own talent and research, to end up in American hands, especially at a time when the US is acting as its main rival in the AI race.

- Ends