As interest in ChatGPT rises at home and abroad, the semiconductor industry expects ChatGPT to drive demand for semiconductors such as △GPUs △servers △memory, turning attention to the market that generative AI and large-scale AI models will lead.
Nvidia GPUs to benefit from ChatGPT… AMD also rolling out server chips in succession
“GPT-4's trillion-scale parameters demand high memory capacity, high speed, and low power”
Domestic and international interest in ChatGPT is rising rapidly. ChatGPT reached 100 million users worldwide in just two months, and on Naver search trends, queries for 'Chat GPT' and 'Chatgpt' rose 8,428% and 1,424%, respectively, compared with December. On Google Trends, worldwide search interest in ChatGPT hit the maximum index of 100 in February, up sharply from 0-2 in November.
■ Estimated 30,000 GPUs Needed to Commercialize ChatGPT
▲ Major tools and application areas of generative AI (Source: TrendForce)
According to market research firm TrendForce, GPU demand is estimated to reach 30,000 units, based on Nvidia's A100 GPU, as ChatGPT prepares for commercialization.
TrendForce noted that the number of training parameters used to develop ChatGPT's underlying model reached about 180 billion in 2020, requiring roughly 20,000 GPUs to process the training data, and predicted that the number of GPUs needed for commercialization will rise further to around 30,000.
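As a rough cross-check of these figures, the widely used approximation that training a transformer costs about 6 × (parameters) × (tokens) floating-point operations can be applied. The token count and hardware utilization below are illustrative assumptions, not TrendForce data; the sketch only shows that a cluster of roughly 20,000 A100s puts a GPT-3-scale training run in the range of a few days.

```python
# Back-of-envelope sketch (not TrendForce's methodology): how long would
# ~20,000 A100s take to train a ~180B-parameter model, using the common
# approximation that training FLOPs ≈ 6 * parameters * tokens?

N_PARAMS = 180e9           # ~180 billion parameters (figure cited above)
N_TOKENS = 300e9           # assumed GPT-3-scale training corpus (~300B tokens)
A100_PEAK_FLOPS = 312e12   # A100 peak BF16/TF32 tensor throughput (FLOP/s)
UTILIZATION = 0.30         # assumed effective hardware utilization
N_GPUS = 20_000            # GPU count quoted above

total_flops = 6 * N_PARAMS * N_TOKENS                    # ~3.2e23 FLOPs
cluster_flops = N_GPUS * A100_PEAK_FLOPS * UTILIZATION   # effective FLOP/s
train_days = total_flops / cluster_flops / 86_400
print(f"~{train_days:.1f} days of training")             # on the order of 2 days
```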
Accordingly, as generative AI becomes a trend, GPU demand is expected to rise significantly, boosting profits for companies across the related supply chain. Nvidia in particular is drawing the most market attention, as it stands to benefit most from AI development if ChatGPT succeeds.
The NVIDIA DGX A100, an AI-workload system built on A100 Tensor Core 80GB GPUs, delivers 5 petaflops (5,000 trillion operations per second) of performance, providing options for big-data analytics and AI acceleration. AMD, meanwhile, is rolling out its Instinct MI100-300 series of AI accelerators in succession, targeting the server market. In her recent CES keynote, CEO Lisa Su cited ChatGPT, saying, “It can reduce training time from months to weeks, which can lead to energy savings of millions of dollars.”
TrendForce expects TSMC to play a key role in this supply chain as the major foundry for advanced computing chips. TSMC is currently the main manufacturer of Nvidia's chips and holds a solid position in advanced nodes and chiplet packaging. New demand for ABF (Ajinomoto Build-up Film) substrates and benefits for AI chip developers are also expected.
Lee Seung-woo, head of the research center at Eugene Investment & Securities, commented on ChatGPT at the Semiconductor Development Strategy Forum, predicting that “demand for computation and server expansion will grow depending on how far AI services spread,” while noting that the key variable is growth in the number of users.
He noted that “Frontier, currently the world's fastest supercomputer, has a computing speed of 1.1 exaflops (1.1 quintillion calculations per second), and training GPT-3 on it takes about 3.5 days.” He put the average cost of a single transaction at about 2 cents: if, for example, 30 million people each ask an average of 10 questions a day, 300 million transactions occur, requiring a daily computing cost of about 6 million dollars (roughly 8 billion won).
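A quick sketch of that arithmetic follows; the per-query cost and usage figures are the analyst's estimates rather than measurements, and the exchange rate and GPT-3 training-FLOP figure are added assumptions.

```python
# Back-of-envelope check of the figures quoted above. The 2-cent cost and
# user numbers are the analyst's estimates; the exchange rate is an
# illustrative assumption.

COST_PER_QUERY_USD = 0.02        # ~2 cents per transaction
USERS = 30_000_000               # 30 million daily users (example)
QUERIES_PER_USER = 10            # average questions per user per day
KRW_PER_USD = 1_300              # assumed won/dollar exchange rate

transactions = USERS * QUERIES_PER_USER                 # 300 million/day
daily_cost_usd = transactions * COST_PER_QUERY_USD      # $6,000,000/day
daily_cost_krw = daily_cost_usd * KRW_PER_USD           # ~7.8 billion won/day
print(f"{transactions:,} transactions/day -> ${daily_cost_usd:,.0f}"
      f" (~{daily_cost_krw / 1e9:.1f}B won)")

# Sanity check on the Frontier claim: published estimates put GPT-3's
# training cost near 3.14e23 FLOPs; at 1.1 exaflops that works out to
# ~3.3 days, consistent with the quoted ~3.5 days.
days = 3.14e23 / 1.1e18 / 86_400
print(f"~{days:.1f} days to train GPT-3 on Frontier")
```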
“AI learning itself is not a major variable in demand growth,” the center director said. “What matters is not training but the traffic that comes from a growing user base, which can drive demand for chips for server expansion and model building.”
■ ChatGPT-4's Memory Hyperscaling Requirements
▲ SK Hynix Vice President Lee Seong-hoon presenting at the Semiconductor Development Strategy Forum
“If GPT-4 is actually commercialized, the increase in data processing speed and capacity is expected to be enormous.”
Lee Seong-hoon, vice president of SK Hynix Future Technology Research, said this at the recent Semiconductor Development Strategy Forum.
After ChatGPT-3.5 shocked the world with 175 billion parameters, the upcoming version 4.0 is expected to carry between 1 trillion and 100 trillion parameters, raising expectations among industry and users alike.
The semiconductor industry at home and abroad is also paying close attention to ChatGPT. From ultra-high-speed wireless transmission of massive data over 5G and 6G infrastructure, to autonomous driving and connected cars, and now the AI shock of ChatGPT, these applications fundamentally demand high-capacity, ultra-high-speed, hyperscale memory.
In addition, major search engine providers such as Microsoft, Google, and Naver, which will develop and operate ChatGPT-style services, will inevitably see growing demand for low-power solutions, driven by ESG trends and the need to cut energy costs.
Vice President Lee Seong-hoon said, “Ultimately, the value of memory lies in solutions that provide hyperscale storage, high speed, and drastically reduced power consumption,” adding in his outlook that “to overcome the limits of scaling, DRAM requires finer patterning, higher cell capacitance, and improved junction resistance in interconnects.”
With memory demand falling sharply, industry deficits are expected to widen in the first half of this year, and attention is on whether ChatGPT's success can drive demand for memory, servers, and AI processors.