▲Butter heating test of the DeepX DX-M1 chip. The butter remains unmelted.
DeepX Participates in SEDEX 2024... Unveils "Butter Test"
Demonstration of AI semiconductor DX-M1 chip with low power and low heat generation strengths
“High demand for intelligent and unmanned systems, opportunities in CCTV and industrial PCs”
Development of products and solutions with built-in AI capabilities, including on-device AI, is accelerating across the market and industry. With AI convergence expected to spread widely next year, from the micro-edge all the way up to servers running very large language models, competition to claim the blue ocean of edge AI semiconductors is also intensifying.
AI semiconductor fabless company DeepX participated in the 2024 Semiconductor Expo (SEDEX), held at COEX in Gangnam-gu, Seoul, on the 23rd, presenting a real-time, multi-channel on-device demo of a vision language model (VLM), one of the latest classes of AI models, running on its DX-M1 M.2 module.
DeepX is a leading domestic fabless company developing AI semiconductors. In 2024 it exhibited its products around the world, including in Taiwan, China, Japan, Europe, and the United States, and provided samples to approximately 120 global companies. The company revealed that it is currently collaborating with about 20 of them on mass-production products.
▲DeepX's exhibition booth at SEDEX 2024
At the booth, a 'butter heating test' was underway to demonstrate the low heat generation, enabled by low power consumption, that DeepX emphasizes. By placing butter on the chips, it gave visitors a visible comparison of how much less heat the DX-M1 produces than competitor boards during AI computation.
The demonstration highlights the DX-M1 chip's lower power consumption at comparable performance. The major challenges facing the current AI semiconductor market center on △cost △performance △power consumption △size △interoperability, and the like.
Power efficiency in particular has emerged as a key consideration for customers, as it affects ESG commitments, the operating costs of AI server products, and battery life in on-device AI products. DeepX said it is meeting this market demand, stating, "The power consumption and heat generation problems of GPUs need to be solved, and DeepX's flagship NPU technology delivers 20 times the power and thermal efficiency of GPUs."
For its server-grade product, the DX-H1 PCIe module, DeepX showcased a demo developed in collaboration with global server companies HP and K2US that ran the latest object recognition AI algorithm in real time across more than 100 channels. The booth also drew attention with a range of real-time demos applicable to smart cameras, robot platforms, industrial embedded systems, servers, and data centers.
▲Demo of the server-grade DX-H1 PCIe module

"There is a huge need among companies to integrate AI semiconductors into new product development to make their products more intelligent or unmanned," said Lee Ah-hyung, team leader at DeepX. "We expect DeepX's first mass production to be in intelligent CCTV, and many collaborations are also underway in industrial PCs."
The team leader added, "As a fabless company, we still lack credibility because we have few references, so we expect volume to grow once the market response and product reliability are confirmed through initial limited mass production," noting, "DeepX plans to build references and market share in B2B products."
Meanwhile, solutions and software stacks are also essential for on-device AI and AI semiconductor companies. Because AI model optimization and compression are required, along with interoperability across heterogeneous hardware, close integration of hardware, embedded systems, and AI software matters more than anything else.
DeepX reported that more than 60% of its development team of nearly 70 people work on developing and supporting the software stack, and that it provides an automated framework that lets customers deploy AI models in the DeepX development environment with drag-and-drop ease.