Singapore miles ahead in depth of technical knowledge: OpenAI research chief

Developers in the city-state have deep technical knowledge while policymakers have a level of technical literacy unseen in other parts of the world, he says.


Mark Chen, chief research officer at OpenAI, says the company’s research pipeline to develop new AI models typically unfolds in phases.

Singapore is well-positioned to reap success in artificial intelligence (AI), according to one of the top minds in the field.

Developers in the city-state have deep technical knowledge, and policymakers here have a level of technical literacy unseen in other parts of the world, said OpenAI’s research chief Mark Chen.

“I mean this very, very seriously – there’s a gap between Singapore and everywhere else,” he told The Business Times in a recent interview.

Chen joined OpenAI – the company behind ChatGPT – in 2018, and assumed his current role in January.

He recalled an interaction with former prime minister Lee Hsien Loong during OpenAI chief executive Sam Altman’s 2023 world tour to discuss AI-related issues.

“Sam put me on the spot and had me give (Lee) a demo – and that’s kind of when I learnt he actually also does programming, and I was just so surprised,” Chen recounted.

“You go to the government, and people are so literate in a way that I didn’t feel in other parts of the world,” he added.

Singapore has the world’s highest rate of paid ChatGPT subscriptions per capita, with one in four people here using the tool weekly.

In October 2024, OpenAI announced plans to establish a regional hub in Singapore by the end of the year – its second Asia-Pacific office after Tokyo – starting with an initial headcount of five to 10 staff.

Asked if Singapore could evolve into a research hub for OpenAI, Chen said: “It could be – again, we have to see how things evolve... But there’s certainly the talent here.”

San Francisco remains the central hub for OpenAI’s research activities, added Chen, who oversees more than 600 research scientists, engineers, and software developers. “We do have a very big centre of gravity there,” he said. “London is another epicentre of talent for research.”
 

How OpenAI trains its models

OpenAI’s research pipeline to develop new AI models typically unfolds in phases, Chen explained.

The first is “pre-training”, where the model processes large datasets, learns patterns, and makes predictions based on the data.
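
In code, that objective is commonly expressed as next-token prediction: the model reads a stretch of text and is trained to predict each following token. The sketch below illustrates the idea in PyTorch; the tiny model and random stand-in corpus are purely illustrative, not OpenAI’s systems or data.

```python
# Minimal sketch of the pre-training objective: next-token prediction.
# The tiny model and random "corpus" are stand-ins for a large transformer
# trained on a massive text dataset.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, embed_dim, context_len = 1000, 64, 32

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.head = nn.Linear(embed_dim, vocab_size)

    def forward(self, tokens):                 # tokens: (batch, seq)
        return self.head(self.embed(tokens))   # logits: (batch, seq, vocab)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    batch = torch.randint(0, vocab_size, (8, context_len))  # stand-in corpus
    inputs, targets = batch[:, :-1], batch[:, 1:]            # predict the next token
    logits = model(inputs)
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
```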

The next phase, “post-training”, involves refining the model to understand user preferences. For example, this step includes presenting multiple responses to the same prompt and training the model to recognise which response is most useful – a capability already embedded in ChatGPT.

“We can reinforce that and have the model become a lot more useful, and be able to satisfy the user in tandem,” Chen said.
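
Published work on this kind of post-training, often called reinforcement learning from human feedback, typically trains a reward model on pairwise comparisons so that the response labellers preferred scores higher than the one they rejected. The sketch below shows that pairwise loss; the linear reward model and random embeddings are stand-ins, not OpenAI’s pipeline.

```python
# Illustrative pairwise-preference loss from published RLHF-style post-training:
# given a "chosen" and a "rejected" response to the same prompt, train a reward
# model so the chosen response scores higher.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim = 64

class RewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.score = nn.Linear(embed_dim, 1)

    def forward(self, response_embedding):                  # (batch, embed_dim)
        return self.score(response_embedding).squeeze(-1)   # scalar reward per response

reward_model = RewardModel()
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Stand-in embeddings for two responses to the same prompt, where a labeller
# marked the first as more useful than the second.
chosen = torch.randn(8, embed_dim)
rejected = torch.randn(8, embed_dim)

# Pairwise (Bradley-Terry) loss: widen the margin between the two scores.
loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```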

The final phase is reinforcement learning, which aims to develop “reasoning” capabilities within the models – an area in which OpenAI is “investing heavily”, he added.
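
OpenAI has not published the algorithm behind this phase, but the general idea of outcome-based reinforcement learning can be illustrated with a textbook policy-gradient (REINFORCE) sketch: attempts that reach a correct answer are rewarded, and the model is nudged to make such attempts more likely. Everything below is a toy illustration, not OpenAI’s method.

```python
# Generic sketch of outcome-based reinforcement learning on reasoning tasks:
# sample an answer, reward it only if it is correct, and reinforce the
# log-probability of rewarded samples (REINFORCE). Toy setup throughout.
import torch
import torch.nn as nn

n_answers = 10                       # toy "answer vocabulary"
policy = nn.Linear(16, n_answers)    # stand-in for a language-model policy
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(200):
    problem = torch.randn(32, 16)                    # toy problem encodings
    correct = torch.randint(0, n_answers, (32,))     # ground-truth answers
    dist = torch.distributions.Categorical(logits=policy(problem))
    sampled = dist.sample()                          # model "attempts" an answer
    reward = (sampled == correct).float()            # 1 if correct, else 0
    # REINFORCE: push up log-prob of sampled answers, weighted by reward.
    loss = -(reward * dist.log_prob(sampled)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```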

In September 2024, OpenAI launched the first ChatGPT model powered by reinforcement learning – known as o1 – as a preview. The full version was released in December.

Altman, in an 18 January post on X (formerly Twitter), said an updated model, o3-mini, is expected to launch in “a couple of weeks”.

The o1 model, Chen explained, was designed to allocate more time to processing complex queries.

“It’s very much like a human... If you had a human, you asked them (a question) and they had to respond to you immediately, they’re not going to be very good at reasoning,” he said.
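
One widely known way to convert extra processing time into better answers is best-of-n sampling: generate several candidate responses and keep the one a scorer rates highest. The sketch below illustrates that general idea only; the helper functions are hypothetical stand-ins, and this is not a description of how o1 works internally.

```python
# Best-of-n sampling: spend more inference-time compute on a hard query by
# drawing several candidate answers and keeping the highest-scoring one.
# The helpers below are hypothetical stand-ins for a model and a scorer.
import random

def generate_candidate(prompt: str) -> str:
    # Hypothetical stand-in for sampling one response from a language model.
    return f"candidate answer {random.randint(0, 999)} to: {prompt}"

def score(prompt: str, answer: str) -> float:
    # Hypothetical stand-in for a verifier or reward model scoring the answer.
    return random.random()

def answer_with_more_compute(prompt: str, n_samples: int) -> str:
    # Harder prompts can simply be given a larger n_samples budget.
    candidates = [generate_candidate(prompt) for _ in range(n_samples)]
    return max(candidates, key=lambda c: score(prompt, c))

print(answer_with_more_compute("Prove that the square root of 2 is irrational.", n_samples=8))
```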

A proprietary algorithm developed in-house powers the model’s reinforcement learning capabilities.

“Under the hood, there is a very powerful algorithm that we discovered, that we do think qualifies this as a paradigm shift – and I do think it’s one of our strategic advantages right now,” Chen said.
 

Source: The Business Times © SPH Media Limited. Permission required for reproduction.
