PR CENTER


Press Releases

[GIGABYTE] To Harness Generative AI, You Must Learn About "Training" and "Inference"

2023.07.04


Of course, generative AI is not actually a robot holding a paintbrush. But this evocative image captures just how endearing, capable, and inspiring the future of AI can be.

Rather than rehash the prowess of generative AI, we will go ahead and pull back the curtain: all generative AI boils down to two essential processes, "training" and "inference". Once you understand how they work, you will be in a strong position to make them work for you.

Take the chatbot ChatGPT as an example. The "T" in "GPT" stands for transformer, the architecture used in a subset of natural language processing (NLP) called Large Language Models, or LLMs for short. LLMs have become the primary way to teach computers to read and write like humans, because they can "train" themselves on enormous corpora of unlabeled text (we are talking about word counts in the trillions) through deep learning and artificial neural network (ANN) techniques. Simply put, because it has taught itself to read and write on the equivalent of the entire Wikipedia, it can converse on almost any topic. The part where it draws on its past training to respond to a query is called "inference".

Glossary: 《What is Natural Language Processing?》《What is Deep Learning?》

So how does Stable Diffusion, Midjourney, or any of the myriad text-to-image models work? Not so differently from ChatGPT, except that this time a generative image model is attached to the language model. This model is likewise trained on vast amounts of digital text and images, so the AI can turn images into text (describing what is in a picture with words) or do the reverse (drawing what it is asked to draw). Cleverly injecting masking or blurring to make the final work more appealing will feel like second nature to anyone who has taken a selfie with a smartphone. Given enough effort in feeding the AI the right prompts, it is no surprise that AI-generated art has already won blue ribbons at art competitions.

Now that we have covered the basics of how the most popular forms of generative AI work, let's take a closer look at the relevant technologies and tools you will need to take part in this exciting new breakthrough in artificial intelligence.

《Glossary: What is Artificial Intelligence?》

Generally speaking, during AI training an ocean of labeled data is poured into the algorithm to be "studied". The AI makes guesses and then checks the answers to improve its accuracy. Over time, the AI becomes so good at guessing that its answers are consistently correct; in other words, it has "learned" the information you want it to work with. Without a doubt, big data, the massive amount of data collected every day by our interconnected electronic devices, has gone a long way toward ensuring that AI has a wealth of information to learn from. Earlier training methods relied on "labeled" data and were supervised by human programmers, but recent advances allow AI to engage in self-supervised or semi-supervised learning with unlabeled data, which greatly speeds up the process.

《Glossary: What is Big Data?》

Needless to say, the scale of the computing resources needed to train AI is not only breathtaking, it is also growing exponentially. For example, GPT-1, released in 2018, was trained for "a month on 8 GPUs" using roughly 0.96 petaflop/s-days (pfs-days) of resources. GPT-3, released in 2020, used 3,630 pfs-days of resources. Numbers are not available for the current iteration, GPT-4, but there is no doubt that the time and computing involved were far greater than for GPT-3.

《Glossary: What is GPU?》

So, if you want to take part in AI training, you will need a powerful GPU computing platform. GPUs are the tool of choice because they excel at processing large amounts of data through parallel computing. Thanks to parallelization, the aforementioned transformer architecture can process all the sequential data fed to it in one go. For the discerning AI expert, the type of cores inside the GPU can also make a difference if the goal is to further reduce the time it takes to train the AI.

Learn More: 《Glossary: What is Parallel Computing?》《Glossary: What is Core?》《CPU vs. GPU: Which Processor is Right for You? A Tech Guide by GIGABYTE》

Inference: How It Works, What Tools to Use, and How GIGABYTE Can Help

Once the AI has been properly trained and tested, it is time to move on to the inference phase, in which the AI is exposed to a flood of new, unfamiliar data to see whether it sinks or swims. In the case of generative AI, this may mean anything from a request to write an essay about inner-city crime on Pluto, to a demand for a painting of an astronaut on horseback in the style of the 19th-century Japanese artist Utagawa Hiroshige. The AI compares the parameters of these new inputs against what it "learned" during its extensive training process and generates the appropriate output. While these forward and backward propagations shuttle between the layers, something else interesting is happening: the AI compiles the feedback it receives from its human users for the next training session. It takes note when it is praised for a job well done, and it pays especially close attention when humans criticize its output. This continuous loop of training and inference is what makes artificial intelligence smarter and more lifelike every day.

GIGABYTE's G293-Z43 offers an industry-leading, ultra-dense configuration of sixteen AMD Alveo™ V70 inference accelerator cards in a compact 2U chassis. This setup not only delivers outstanding performance and energy efficiency, it also reduces latency. Such a high-density configuration is made possible by GIGABYTE's proprietary server cooling technology.

Generative AI is permeating more and more aspects of our lives, from retail and manufacturing to healthcare and banking. Ultimately, the server solution you choose depends on which part of the generative AI journey you want to power, whether that is processing data to "train" the AI, or deploying the AI model so it can make "inferences" in the real world. The prowess of these new AI inventions will not seem so out of reach once you understand that there are plenty of dedicated tools designed for AI workloads, from something as minute as the architecture of the processor cores to something as comprehensive as GIGABYTE Technology's total solutions. The tools for success are in place; all you have to do is discover how artificial intelligence can "Upgrade Your Life".
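To make the distinction between "training" and "inference" concrete, here is a minimal, generic sketch in PyTorch. It is illustrative only and is not GIGABYTE's or OpenAI's actual software stack; the tiny model, the random data, and the hyperparameters are hypothetical placeholders. It simply shows that a training step involves forward propagation, a loss, backward propagation, and a weight update, while inference is a single forward pass with no learning, and it closes with the petaflop/s-day arithmetic cited above.

```python
# Minimal sketch of "training" vs. "inference" with a toy model.
# Illustrative only; real LLMs are transformers trained on thousands of GPUs,
# not a two-layer network on random data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# --- Training: forward pass, loss, backward pass, weight update ---
x = torch.randn(64, 16)             # a batch of (hypothetical) inputs
y = torch.randint(0, 4, (64,))      # matching labels
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)     # forward propagation + loss
    loss.backward()                 # backward propagation (gradients)
    optimizer.step()                # adjust weights: the model "learns"

# --- Inference: a forward pass only, no gradients, no weight updates ---
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 16)).argmax(dim=1)

# --- The scale of training compute, in petaflop/s-days (pfs-days) ---
# 1 pfs-day = 10^15 floating-point operations per second, sustained for a day.
PFS_DAY = 1e15 * 86_400             # ~8.64e19 FLOPs
print(f"GPT-3's reported 3,630 pfs-days ~= {3630 * PFS_DAY:.2e} FLOPs")
```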


[GIGABYTE] Silicon Valley Startup Sushi Cloud Rolls Out Bare-Metal Services with GIGABYTE

2023.06.16


Sushi Cloud positions itself as a "challenger cloud computing company" that competes in the market with a bare-metal version of cloud services. Bare metal gives end users exclusive access to the computing resources they have paid for. GIGABYTE's R152-Z30 Rack Servers are the backbone of Sushi Cloud's bare-metal infrastructure.

On the hardware side, we all know that a server's powerful processors need impressive memory and storage capacity to really shine. The R152-Z30 supports 8-channel RDIMM/LRDIMM DDR4 memory across 16 DIMM slots. GIGABYTE's unique design allows maximum memory speed even when two DIMMs are installed per channel; a clock rate of 3200 MHz for 2DPC can be enabled in the BIOS settings. As for storage, the R152-Z30 has four 3.5-inch SATA hot-swappable HDD/SSD bays, which can also accommodate 2.5-inch drives. Ultra-fast M.2 with a PCIe/NVMe interface is on hand to unlock the full potential of solid-state storage. This hardware configuration allows the R152-Z30 to handle the most demanding computing workloads.

Glossary: 《What is PCIe?》《What is NVMe?》

Sushi Cloud's customers are used to choosing their own operating systems and software, which means the R152-Z30 is just what the doctor ordered. GIGABYTE servers are certified by a number of prominent software partners and are compatible across diverse ecosystems. GIGABYTE's standing as a member of major software alliance partner programs means that joint solutions can be developed and validated quickly. This allows customers to modernize their IT infrastructure and application services with greater efficiency, flexibility, and cost optimization.

Remote management is another vitally important feature of GIGABYTE servers. Because Sushi Cloud's servers are often installed in other countries, support staff need fast and reliable access to the server management system in order to provide proper service to customers. Through a special processor built into the servers, GIGABYTE provides two free management applications:

● GIGABYTE Management Console: For the management and maintenance of a single server or a small computing cluster, users can rely on the pre-installed GIGABYTE Management Console to carry out real-time health monitoring and management through a browser-based graphical user interface (GUI). The GIGABYTE Management Console supports the standard IPMI specification, automatic event recording, and integrated SAS/SATA/NVMe device and RAID controller firmware.

Glossary: 《What is Computing Cluster?》《What is RAID?》

● GIGABYTE Server Management (GSM): GSM is a software suite that can manage a cluster of servers simultaneously over the Internet; it can be downloaded from the official GIGABYTE website. GSM complies with IPMI and Redfish standards and covers the full range of system management functions, including GSM Server for real-time remote control, GSM CLI for remote monitoring and management, GSM Agent for data retrieval through the OS, GSM Mobile for remote management via a mobile app, and GSM Plugin, which lets users perform real-time monitoring and management through VMware vCenter.
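Because GSM complies with the DMTF Redfish standard mentioned above, the same kind of management data can also be retrieved with any Redfish-capable client. The sketch below is a generic example that uses only standard Redfish endpoints (/redfish/v1/Systems and the ComputerSystem resources it links to); the BMC address and credentials are placeholders, and this is not a GIGABYTE-specific API.

```python
# Generic Redfish query against a server BMC.
# Uses only standard DMTF Redfish resources; address and credentials are placeholders.
import requests

BMC = "https://192.0.2.10"          # hypothetical BMC address
AUTH = ("admin", "password")        # hypothetical credentials

# List the systems managed by this BMC.
systems = requests.get(f"{BMC}/redfish/v1/Systems", auth=AUTH, verify=False).json()

for member in systems.get("Members", []):
    # Each member is a link to a ComputerSystem resource.
    system = requests.get(f"{BMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(system.get("Model"),
          "| power:", system.get("PowerState"),
          "| health:", system.get("Status", {}).get("Health"))
```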


[GIGABYTE] 0 to 100 KPH in 3.3 Seconds! NTHU Builds Electric Formula Student Race Cars with GIGABYTE

2023.06.15


Representing Hsinchu’s Tsing Hua University, NTHU Racing is one of Taiwan’s top Formula Student racing teams. In 2019, its electric formula student race car, the “TH04”, took second place in Formula SAE Japan. In August of 2022, its new and improved, 100% manufactured in Taiwan “TH06” raced for the gold in Formula Student Germany. Tsing Hua University built the “TH06” using GIGABYTE Technology’s W771-Z00 and W331-Z00 Tower Server/Workstation products. From finite element analysis (FEA) and computational fluid dynamics (CFD) modelling and analysis during the design phase, to racing simulations and experimenting with new tech during the testing phase, GIGABYTE servers provided NTHU Racing with the processing power and versatility it needed to engineer a high-performing electric vehicle that can go from 0 to 100 kph in 3.3 seconds—almost on par with Tesla’s Model S, which can go from 0 to 60 mph in 2.6 seconds.     The thunder of engines. The odor of burnt rubber. A checkered flag waving beneath a startling sky, above young hearts yearning to make their mark on history.To a layperson, automobile racing may seem like the pastime of motorsport enthusiasts and the vanity projects of big auto brands. But an engineer looks at a race car and sees something more. They see the care and expertise that went into the design; they see, too, the skill and daring it takes to push the machine to its limit. Every car that rolls out of the shop is a feat of human ingenuity. A car that outperforms its peers is an achievement of the highest order.Formula Student, a globe-spanning series of engineering design competitions, uses auto racing to test the theoretical knowledge of students in a practical context. By asking student teams to design and build their own formula-style racing vehicles, the organizers provide a platform for brilliant young minds to demonstrate their engineering skills. Since its conception in the late 1970s, Formula Student has held races around the world. It has hosted teams from over 1,000 universities.Located in Hsinchu, Taiwan’s Tsing Hua University (NTHU) is a comprehensive research university renowned for its science and technology programs. It is also an avid participant in Formula Student. The NTHU Racing Team is made up of more than 70 students from a diverse coalition of colleges, including power mechanical engineering, chemical engineering, computer science, economics, and more. This motley crew of race car aficionados has taken the world by storm. In 2019, NTHU Racing’s electric formula race car, the “TH04”, took second place in Formula SAE Japan. On the Formula Student Electric World Ranking List, NTHU Racing ranked number 23 out of 203 universities in 2022. It beat out some of the world’s most prestigious institutes—including the Massachusetts Institute of Technology.Learn More:《Glossary: What is Computer Science?》《GIGABYTE Helps NCKU Train Award-Winning Supercomputing Team》《GIGABYTE’s ARM Server Boosts Development of Smart Traffic Solution by 200%》In 2022, NTHU Racing set its sights on Formula Student Germany, which was held in August in Hockenheim—home of the Hockenheimring and the Formula One German Grand Prix. NTHU Racing was represented by its newest champion: the “TH06”, an electric formula student race car that's 100% manufactured in Taiwan. Weighing in at 245 kilograms and running on 80 kilowatts of electric power, the TH06 is a four-wheel drive vehicle that can go from 0 to 100 kilometers per hour (kph) in 3.3 seconds. What does that mean? 
Consider the fact that a topline Tesla Model S needs 2.6 seconds to accelerate from a standstill to 60 miles per hour (the equivalent of 100 kph).The TH06 is a beast of a machine. NTHU Racing built it with GIGABYTE Technology’s W771-Z00 and W331-Z00 Tower Server/Workstation products. From finite element analysis (FEA) and computational fluid dynamics (CFD) modelling and analysis during the design and manufacturing phase, to racing simulations and new tech applications during the testing phase, the W771-Z00 and W331-Z00 provide the processing power and usability the student team needed to build state-of-the-art electric formula race cars. What’s more, NTHU did all this using Taiwanese tech and mostly made-in-Taiwan components—similar to the spirit of the Hon Hai-led MIH Consortium—so that the TH06 can fully represent Taiwan on the international stage.Learn More:《More information about GIGABYTE’s Tower Server/Workstation》《Japan’s Waseda University Simulates Natural Disasters with GIGABYTE GPU Server and Workstations》     Design and Manufacturing: Enterprise-grade Computing Powered by AMD CPUs and GPUs Designing an entire car from scratch is a monumental undertaking. Even before the first parts of the frame are soldered together, and even before the first sets of gears are assembled, a lot of grueling work must be done—on computers.According to Danny Lin (林庭偉), Captain of NTHU Racing, there are over 4,000 components in a formula student race car. Each part of the TH06 had to be rendered and assembled virtually using computer-aided design (CAD) software. The 3D models of key elements, such as the electric vehicle’s motor, inverter, and tires, needed to be tested with programming and numeric computing platforms, such as MATLAB.Even then, the design process is far from the finish line. The computer model of the race car must be subjected to a full set of analyses. One example is computational fluid dynamics (CFD) modelling and analysis, which focuses on the vehicle’s centerline streamline, pressure contour, and velocity profile. This helps to reduce wind resistance and improve the race car’s speed. Another example is finite element analysis (FEA), which checks the vehicle’s steel frame, composite material, chassis, and accumulator. FEA is used to understand how each part of the car will react to stress, vibration, fatigue, and other factors that may affect performance or safety.Learn More:《GIGABYTE Server Technology Helps to Achieve Aerodynamic Vehicle Design》《Israeli Developer of Autonomous Vehicles Chose GIGABYTE Servers》 The powerful GIGABYTE W771-Z00 tower server/workstation brings enterprise-grade computing prowess straight to the desktop. NTHU Racing uses it for a lot of the heavy lifting during the design and manufacturing process of the electric formula race car, including computational fluid dynamics modelling and analysis, and finite element analysis.   GIGABYTE W771-Z00 and W331-Z00 are capable of handling these computationally intensive, iterative computation workloads. This is because they are outfitted with cutting-edge central processing units (CPUs) and general-purpose graphics processing units (GPGPUs) provided by AMD. The W771-Z00 is powered by AMD Ryzen™ Threadripper™ PRO 3995WX CPU and AMD Radeon™ RX 6800 XT GPU. The W331-Z00 runs on AMD Ryzen™ 9 5950X CPU and AMD Radeon™ RX 6800 XT GPU. 
Through a process known as heterogeneous computing, the two servers are able to accelerate NTHU Racing’s design process and help them bring their magnum opus into existence. Glossary:《What is GPGPU?》《What is GPU?》《What is Heterogeneous Computing?》The AMD Ryzen™ Threadripper™ PRO processor is specially designed for workstation users who work with rendering or the creation of visual effects. This market sector encompasses more than animation studios or render farms; engineering, data science, and oil & gas exploration can also benefit from the CPUs’ high number of cores, leading single or multi-thread performance, and incredible throughput via PCIe lanes. The AMD Radeon™ RX 6800 XT GPU is a new generation of workstation GPU that is ideal for mainstream CAD applications and media or entertainment-related workloads. Based on the AMD RDNA™2 architecture, it provides hardware-based ray tracing and variable rate shading (VRS) to deliver performance improvements across the board. Learn More:《What is Render Farm?》《What is Core?》《What is Thread?》《What is PCIe?》《NCHC and Xanthus Elevate Taiwanese Animation with GIGABYTE Servers》《GIGABYTE’s GPU Servers Help Improve Oil & Gas Exploration Efficiency》Captain Danny Lin uses an example to vividly illustrate how GIGABYTE’s servers are contributing to NTHU Racing’s success. “Computational fluid dynamics is arguably the most compute-intensive part of the design process. High-end processors are needed to handle these workloads. A normal computer may take one or two days just to run one simulation. By using GIGABYTE, we are able to complete the same task in a third, or even a quarter of the time.”     Testing and Practicing: Excellent Versatility for Different Test Environments The computational power and versatility of the W771-Z00 and W331-Z00 are a boon to NTHU Racing beyond the design process. A prototype vehicle must be tested in the field and continuously upgraded. This is usually done through test drives. However, a student team may not have access to such facilities. Not to mention, there is a dearth of racing circuits in Taiwan. The GIGABYTE W331-Z00 is a versatile tower server/workstation that offers both convenience and optimal versatility. NTHU Racing runs racing simulations on it to test prototypes and prepare for electric vehicle racing. It also develops autonomous driving tech to prepare for future races designed for self-driving cars.   NTHU Racing overcame this hurdle by running a racing simulator on the GIGABYTE tower server/workstation. The set-up looks almost like an arcade-style racing game, complete with a driver’s seat, a steering wheel, a stick, and a flat-screen monitor to simulate road conditions. To an outsider, this may look like a contender for the metaverse. But for NTHU students, this is a good way to help drivers acclimate to the racetrack and hone their skills—at any time, in any weather. Learn More:《Glossary: What is Metaverse?》《ArchiFiction Achieves Naked-Eye 3D Virtual Reality with GIGABYTE Workstations》There is another reason why the W771-Z00 and W331-Z00’s versatility and usability are essential to NTHU Racing’s goal of winning the gold. Since Formula Student revolves around engineering knowledge, the application of cutting-edge automotive tech is a viable way for the teams to put themselves on the map. NTHU Racing has already made the choice to focus on electric vehicles, because they see the upcoming trend of electric automobiles in the near future.
Besides electric vehicles, what other new tech may pave the way for a Taiwanese university to pull ahead of its peers? The answer, according to NTHU Racing, is autonomous vehicles—self-driving cars that have advanced into the upper strata of ADAS (Advanced Driver Assistance Systems). Many universities across Taiwan are already researching self-driving tech, and Formula Student has announced exclusive races for autonomous vehicles. Even as NTHU Racing prepares for electric vehicle competitions, the team is also running driverless simulations on the GIGABYTE servers, so that the Taiwanese team may have an edge in future Formula Student races. Learn More:《Glossary: What is ADAS?》《To Empower Scientific Study, NTNU Opens Center for Cloud Computing》A member of NTHU Racing describes GIGABYTE W771-Z00 and W331-Z00 in this way: “The diverse selection of expansion slots and the optimal thermal design allow us to install or remove different modules according to our need. The powerful GIGABYTE workstations save us a lot of rendering time, so we can achieve more design and computing work with much less effort!” The W771-Z00 is equipped with components that ensure optimal PCIe 4.0 signal quality, such as PCB, PCIe slots, and M.2 connectors. It supports up to four double-wide accelerators, which help it deliver high levels of performance for 3D rendering, deep learning, and high performance computing (HPC). The AMD Ryzen™ Threadripper™ PRO platform offers eight DIMM slots for 8-channel DDR4-3200 memory. In terms of storage, users can benefit from a mix of hot-swappable HDD and SSD bays. Glossary:《What is Deep Learning?》《What is High Performance Computing (HPC)?》《What is DIMM?》 GIGABYTE is pleased to support NTHU Racing’s effort to develop groundbreaking formula student race cars with its advanced server products. The nurturing of bright young minds is in keeping with GIGABYTE’s long-term CSR, ESG, and SDGs-related efforts, as well as its corporate vision: “Upgrade Your Life”.   The W331-Z00 supports PCIe 4.0, which can provide higher bandwidth for GPUs and faster storage. Like the W771-Z00, it also offers multiple DIMM slots for memory modules. NVMe and SATA storage options are available to fulfill the workstation user’s hot and cold data processing requirements.《Glossary: What is NVMe?》As for thermal design, Automatic Fan Speed Control is enabled in the majority of GIGABYTE’s air-cooled servers. The speed of individual fans will automatically adjust according to feedback from temperature sensors placed strategically in the server. This helps to achieve an optimal balance between cooling and power efficiency.     GIGABYTE Believes in Nurturing the Best and Brightest in Taiwan The two faculty advisors of NTHU Racing are Dr. Chao-An Lin (林昭安), Associate Dean, Chairman of the Interdisciplinary Program of Engineering (IPE), and Professor of Power Mechanical Engineering (PME); and Dr. Pei-Jen Wang (王培仁), Professor of PME and Chairman of the Scientific Instrument Center. Both professors have expressed their pride in and support for the student racing team. “From the start, I felt that NTHU had to throw its hat into the ring, so we may know where we stand on the global stage,” says Dr. Lin. GIGABYTE Technology is pleased to be part of Tsing Hua University’s effort to nurture the best and brightest students in Taiwan.
By using the most advanced server products to develop groundbreaking formula student race cars, NTHU Racing has every chance to take the gold in the Formula Student competitions and win recognition on the world stage. GIGABYTE is renowned for its commitment to CSR (corporate social responsibility), ESG (Environmental, Social, and Corporate Governance), and Sustainable Development Goals (SDGs). Helping one of Taiwan’s top universities advance its vision of technological education is in keeping with these efforts, as well as the GIGABYTE motto: “Upgrade Your Life”—which is a sincere conviction that brilliant young minds can use high tech solutions to change our world for the better.
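The heterogeneous computing speed-up that Captain Lin describes above comes from offloading the highly parallel parts of a workload onto GPUs. The sketch below is a generic PyTorch illustration of that idea, timing the same dense linear-algebra kernel on a CPU and on a GPU if one is present. It is a stand-in with arbitrary matrix sizes, not the CFD or FEA software the team actually uses.

```python
# Illustration of heterogeneous computing: run the same dense linear-algebra
# kernel on the CPU and, if available, on a GPU accelerator.
# A stand-in for CFD/FEA-style number crunching, not actual simulation code.
import time
import torch

def timed_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()        # make sure setup has finished
    start = time.perf_counter()
    c = a @ b                           # the parallel-friendly workload
    if device == "cuda":
        torch.cuda.synchronize()        # wait for the GPU to finish
    return time.perf_counter() - start

print(f"CPU: {timed_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {timed_matmul('cuda'):.3f} s")
```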


[GIGABYTE] Japanese Telco Leader KDDI Invents Immersion Cooling Small Data Center with GIGABYTE

2023.06.15


Japanese telco giant KDDI Corporation has invented a new class of data centers that are mobile and eco-friendly. These “container-type immersion cooling small data centers” employ “single-phase immersion cooling” to reduce power consumption by 43% and lower the PUE below 1.07. GIGABYTE Technology drew from its years of experience in the telco sector to provide the R282-Z93 and R182-Z91 Rack Servers for KDDI to use as the management and GPU computing nodes in the data center. KDDI benefits from the servers’ powerful 3rd Gen AMD EPYC™ CPUs, the scalable, high-density configuration of NVIDIA® GPUs in small form factors, and the servers’ optimized compatibility with the liquid-based data center cooling solution. GIGABYTE’s participation in KDDI’s project is in line with GIGABYTE’s long-term CSR and ESG efforts, which are focused on working with global industry leaders to “Upgrade Your Life” with high tech while building a greener, more sustainable environment for our future.     KDDI Corporation, which ranks alongside SoftBank Group Corp. and NTT Docomo Inc. as one of Japan’s top three telecommunications companies, started work in July of 2020 on a new, experimental design for data centers that may set the trend for edge computing and “net zero” green computing. These modular, containerized micro data centers, which KDDI refers to as “container-type immersion cooling small data centers” (コンテナ型液浸スモールデータセンター), are expected to drastically reduce both the carbon footprint and the installation lead time of data centers.Learn More:《Visit GIGABYTE’s CSR Homepage to See How We are Giving Back to the World!》《Glossary: What is Data Center?》《Glossary: What is Edge Computing?》There are many benefits to such a design. For one thing, because these data centers are portable and convenient to install, they can quickly respond to a burst in demand in a specific location (such as an outdoor music festival—think Coachella) or an emergency situation (such as an earthquake). What’s more, since these data centers are environmentally friendly and sustainable, they can help us build the smart, interconnected world of tomorrow—without having an adverse effect on the environment.This second point is worth going deeper into. As our lives become more digitized, the international community is building more and more data centers. They are an indelible part of our modern IT infrastructure, because they contribute to every high tech invention you can think of, from extended reality (XR) and generative AI to self-driving cars and space exploration. However, data centers eat up a lot of energy. By some estimates, data centers consume 2% of the world’s electricity and emit roughly as much carbon dioxide (CO2) as the entire aviation industry. How to build new data centers without harming the environment or dumping more carbon into the atmosphere—that is the million-dollar question (or billion-dollar, if you adjust for inflation) that IT experts everywhere are grappling with.Learn More:《Glossary: What is IT?》《Glossary: What is Metaverse?》《ArchiFiction Achieves Naked-Eye 3D Virtual Reality with GIGABYTE Workstations》《GIGABYTE’s ARM Server Boosts Development of Smart Traffic Solution by 200%》《Lowell Observatory Looks for Habitable Exoplanets with GIGABYTE Servers》KDDI may have hit upon a solution: a revolutionary method of data center cooling known as “immersion cooling”. Immersion cooling is a liquid-based thermal management system that submerges the servers directly in a bath of dielectric, nonconductive coolant. 
Since this is more energy-efficient than air cooling, the data center’s PUE, which is its total energy consumption divided by its computing equipment’s energy consumption, can be lowered below 1.07, by KDDI’s estimates. That is a 43% reduction in electricity use when compared with air-cooled data centers, which may have an average PUE of around 1.7.《Glossary: What is PUE?》By adopting immersion cooling in its micro data centers, KDDI hopes to create a win-win situation that fulfills two important aspects of its corporate vision. There are two important reasons why KDDI is exploring possible new designs for data centers. The first is that such a design may contribute to the digital transformation (DX) of Japan. The second is to meet the carbon neutrality goals outlined in "KDDI GREEN PLAN 2030", which is the company’s roadmap to combating climate change, protecting biodiversity, and ushering in the circular economy.Learn More:《Visit GIGABYTE’s Dedicated Solutions Page to Learn All about Immersion Cooling》《How to Pick a Cooling Solution for Your Servers? A Tech Guide by GIGABYTE》 KDDI’s “container-type immersion cooling small data center” is a micro data center housed within a 12ft shipping container. The servers within are cooled with the revolutionary “single-phase immersion cooling” technology, which can help the data center reduce power consumption by 43% and lower the PUE below 1.07.   In fact, there are two types of immersion cooling. In the “two-phase” solution, the coolant undergoes a continuous cycle of vaporization and condensation within the sealed tank, and the heat is extracted from the vapors through a condenser at the top of the tank. In the “single-phase” solution, the coolant does not vaporize; instead, it is pumped to a coolant distribution unit (CDU) to remove the heat. GIGABYTE Technology, a leading brand of data center and server solutions, has already built a two-phase immersion cooling solution for a prominent IC foundry giant that leads the world in advanced process technology.Learn More:《Press Release: Building a Two-phase Immersion Cooling Solution for IC Foundry Giant, GIGABYTE Is Looking to Set New Standards for Modern Data Center》《As Good as Gold: Immersion Cooling Accelerates Oil Extraction》KDDI Corporation opted for single-phase immersion cooling. It invited GIGABYTE Technology to participate in its “container-type immersion cooling small data centers” project. GIGABYTE provided one R282-Z93 and two R182-Z91’s from its state-of-the-art line of Rack Servers to serve as the management and GPU computing nodes. Benefiting from GIGABYTE’s server products and expert knowledge, KDDI’s vision of “carbon neutral digital transformation” may be one step closer to reality.Learn More:《More information about GIGABYTE’s Rack Server》《Glossary: What is GPU?》《Glossary: What is Node?》     Going Green on the Edge with GIGABYTE’s Industry-Leading Rack Servers In Mitsubishi Heavy Industries’ Yokohama Hardtech Hub (YHH) in Yokohama, Japan, KDDI erected a 12ft shipping container measuring 3.66 meters in length, 2.44 meters in width, and 2.90 meters in height. Within, KDDI installed a micro-modular, rack-based immersion cooling system from GRC (Green Revolution Cooling) that offers 24 rack units (24U) of space for servers, and can cool up to 50kVA of heat. The coolant system is made by Mitsubishi Heavy Industries. The Japanese system integrator NEC Networks & System Integration Corp. (NESIC) also helped to set up KDDI’s innovative data center.Learn More:《See It Now! 
GIGABYTE Provides Single-Phase Immersion Cooling with GRC》《Glossary: What is Rack Unit?》Here is how single-phase immersion cooling keeps the data center’s temperature in check: during operation, heat generated by the servers is absorbed by the coolant, which is then piped to the CDU installed in an outward-facing closet partitioned into one end of the container. The heat exchanger in the CDU transfers the thermal energy from the coolant to a chilled water loop, which is kept cool by an external radiator. This is more energy-efficient than air or even direct-to-chip (D2C) liquid cooling. By KDDI’s estimates, when the data center is operating at its maximum capacity, the servers will use 50kVA of power, while the data center’s total power consumption will only be around 53.5kVA—therefore lowering the PUE below 1.07.《Glossary: What is Liquid Cooling?》 Unlike “two-phase immersion cooling” (which GIGABYTE also provides), “single-phase immersion cooling” cycles the coolant out to a CDU, which transfers the thermal energy to a chilled water loop. KDDI estimates that its prototype data center only consumes around 3.5kVA of energy for cooling, compared to 50kVA for computing.   In addition to being more energy-efficient, there are other advantages, as well. Because immersion cooling can support a higher density of servers, the entire system fits snugly within a 12ft container. Immersion cooling can help to protect the servers from the environment, whether it is high temperatures or salt and dusts in the air. Thanks to the modular design, a number of installation site restrictions can be eliminated; therefore, it is expected that the new data centers will be able to overcome the difficulties of space allocation and material delivery, making them more flexible, mobile, and space-saving.With the help of GIGABYTE and other suppliers, KDDI may achieve its goal of deploying micro data centers that are not only mobile and eco-friendly, but can also expedite edge computing across Japan. The adoption of immersion cooling is a valuable learning experience for KDDI’s IT experts, and it may become a role model that can be emulated by other ICT companies in Japan—because it represents the best of many important tech trends, such as carbon-neutral computing, edge computing, and small-scale data centers.“We are impressed that the GIGABYTE servers were able to adapt to our liquid-based cooling system so smoothly,” says Mr. Shintaro Kitayama, Assistant Manager and Server Infrastructure Engineer at KDDI. “GIGABYTE leveraged its past experience in the telco industry to overcome the challenges of immersion cooling, which is very beneficial to our project.”  GIGABYTE contributed to KDDI’s vision of a new class of container-type immersion cooling small data centers in two important ways:1. Suitable server products: the R282-Z93 and R182-Z91 are uniquely suited for the management and GPU computing nodes. This is due to the scalable, highly dense configurations of powerful CPUs and GPUs within easily manageable small form factors.《Glossary: What is Scalability?》2. Expert knowledge: the R282-Z93 and R182-Z91 are designed for air cooling, but GIGABYTE leveraged its years of experience with data center cooling solutions to modify and optimize the hardware and software for single-phase immersion cooling.     
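PUE, as defined above, is the facility's total power draw divided by the power drawn by the IT equipment alone. Using the approximate figures KDDI cites for its prototype (about 50 kVA for the servers and roughly 3.5 kVA of cooling and other overhead), the arithmetic works out as in the short sketch below.

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# The figures below are the approximate numbers KDDI cites for its prototype.
it_power_kva = 50.0                  # power drawn by the servers at maximum capacity
cooling_and_overhead_kva = 3.5       # power used for cooling and other overhead
total_power_kva = it_power_kva + cooling_and_overhead_kva   # ~53.5 kVA

pue = total_power_kva / it_power_kva
print(f"PUE ~= {pue:.2f}")           # ~1.07, versus ~1.7 for a typical air-cooled facility
```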
Server Products: Powerful Processors in Scalable, Highly Dense Small Form Factors The single R282-Z93 that GIGABYTE provided for KDDI contains dual AMD EPYC™ 7643 CPUs and triple NVIDIA® A100 Tensor Core GPUs in a 2U chassis, while the two R182-Z91’s contain AMD EPYC™ 7713 CPUs and NVIDIA® T4 Tensor Core GPUs in 1U chassis. Based on GIGABYTE’s years of experience providing enterprise server solutions for the telco sector, this combination of CPUs and GPUs (or more accurately, GPGPUs) in small form factors is eminently suitable for the management and GPU computing nodes in a micro data center. Learn More:《GIGABYTE Solutions: NVIDIA A100 Tensor Core GPU Offers Unprecedented Acceleration at Every Scale》《Glossary: What is GPGPU?》 GIGABYTE provided the R282-Z93 and R182-Z91 Rack Servers to KDDI, for use as the management and GPU computing nodes in its container-type immersion cooling small data center. The 3rd Gen AMD EPYC™ CPUs and NVIDIA® A100 and T4 GPUs contained within offer KDDI the processing power it needs to offer a variety of services for customers.   The synergy between CPUs and GPGPUs, which is sometimes referred to as heterogeneous computing, is crucial for a data center that supports the public network. Not only are there a large number of users connecting to the data center at all times, the sheer amount of data in graphical form—photos, videos, even data that may be used by computer vision—is staggering. While the 3rd Gen AMD EPYC™ processors already contain an impressive number of cores and threads, it is still necessary to boost CPU performance with the NVIDIA® A100 and T4 GPUs. PCIe Gen 4.0 is used to enable faster data transmission between the processors. Glossary:《What is Heterogeneous Computing?》《What is Computer Vision?》《What is Core?》《What is Thread?》《What is PCIe?》Mr. Kitayama explains, “We thought it necessary to supplement the CPUs with powerful GPUs to provide enough computing power on the data center and the edge site in a highly aggregated state, which will allow us to offer a rich variety of services for our customers. Immersion cooling opens the door to the most advanced GPUs, such as the A100, without encountering limitations in power consumption or cooling.” The form factors of the R282-Z93 and R182-Z91, which can support a highly dense and scalable configuration of CPUs and GPUs, are another reason why the GIGABYTE servers are suitable for micro data centers. Immersion cooling has made it possible for the telco giant to experiment with high-density computing, which refers to the aggregation of a lot of computing resources within limited space. KDDI is able to reduce the data center’s physical footprint without giving up an iota of computing power, all the while retaining the option to scale out if necessary. The small size of the data center also makes it more convenient to load onto the back of a truck, where it can be carted around Japan to the locations that need it the most.《Glossary: What is Scale Out?》     Expert Knowledge: Optimizing the Servers for Immersion Cooling Before the R282-Z93 and R182-Z91’s were shipped to Yokohama, the GIGABYTE team in Taiwan dedicated weeks of time to modifying and optimizing the servers for immersion cooling. Air-based cooling components, such as the fans, heat sinks, and air shrouds, were removed. The servers’ temperature sensors, which were calibrated for the lower ambient temperatures of an air-cooled data center, were also replaced.
GIGABYTE was happy to draw upon its experience with data center cooling solutions to help KDDI achieve the vision of a containerized small data center that utilizes immersion cooling.After the servers arrived in Japan, KDDI conducted a round of operation tests, performance tests, and heat resistance tests to evaluate the cooling effectiveness and structural integrity of the servers. This was done to see if the servers could maintain peak performance without running into issues. The configurable TDP (thermal design power) of the CPUs could go as high as 240W—this is why, in an air-cooled data center, the ambient temperature is usually limited to 30°C or 35°C. However, because immersion cooling is much more efficient at dissipating heat, a much higher ambient temperature may be acceptable. KDDI tested the whole set-up extensively to make sure the micro data center would be ready for field deployment.《Glossary: What is TDP?》 Before shipping the servers to Japan, GIGABYTE optimized the R282-Z93 and R182-Z91 for immersion cooling. GIGABYTE’s wealth of experience in the field of data center cooling, coupled with its in-depth understanding of the telco industry, enables it to help KDDI achieve the vision of micro data centers that utilize immersion cooling.   In the course of testing, KDDI spotted other details to consider when adopting immersion cooling. For example, KDDI discovered that ink on labels may smudge when it comes in contact with the oil-based coolant, and this may form a pollutant that may be detrimental to the cooling system. This example proves that there are many things to watch out for when optimizing the servers for immersion cooling. Fortunately, GIGABYTE was glad to share its treasure trove of expert knowledge, as well as its extensive line of server products which are compatible with air, liquid, and immersion cooling, to help KDDI achieve its vision of the container-type immersion cooling small data center.Learn More:《Waseda University Decodes the Storm with GIGABYTE’s Computing Cluster》《Using GIGABYTE, NIPA Cloud Soars Among CSP Giants in Thailand》     The Future of Data Centers: Mobile, Eco-Friendly, Immersion Cooling KDDI Corporation has embarked on a bold venture that may revolutionize the way we design and build data centers. The outlook is optimistic. In April of 2022, KDDI tested the immersion cooling solution in the KDDI Oyama Technical Center (KDDI Oyama TC) in the Tochigi Prefecture, about 90 kilometers north of Tokyo. The system was tested against the “High Availability” standards of Tier 4 data centers, as outlined by ANSI/TIA-942-A. It was also tested to see if immersion cooling may contribute to overcoming the problems of space limitations and heat dissipation and evaluate the operability for construction and maintenance. The hope was that the container-type immersion cooling small data center may eventually see widespread deployment across Japan.Learn More:《Glossary: What is High Availability?》《Glossary: What is 5G?》《Free Downloadable Tech Guide: How to Build Your Data Center with GIGABYTE?》“We are grateful for GIGABYTE’s active participation as a server manufacturer in our new cooling technology challenge, and for their provision of equipment and technical support,” says KDDI’s Mr. 
Masato Katou, Expert at Platform Technology Department, DX Solution Engineering Division, “As a manufacturer of IT equipment in the immersion field, KDDI hopes that GIGABYTE will lead the industry in the future with GIGABYTE’s advanced technological capabilities and know-how.”GIGABYTE Technology is pleased to receive KDDI’s kind words of commendation. GIGABYTE also anticipates the widespread adoption of data center cooling solutions that will help industry leaders reduce carbon emissions, achieve carbon neutrality, and move the world toward a greener and more sustainable “net zero” future. This is in line with GIGABYTE Technology’s long-term CSR, ESG, and SDGs-related efforts, as well as its vision: “Upgrade Your Life”, which is focused on utilizing innovative high tech to deliver better business results while building a smarter, better world of tomorrow.


[GIGABYTE] CSR and ESG in Action: GIGABYTE Helps NCKU Train Award-Winning Supercomputing Team

2023.06.15


GIGABYTE Technology is not only a leading brand in high-performance server solutions—it is also an active force for good when it comes to CSR and ESG activities. Case in point: in 2020, GIGABYTE provided four G482-Z50 servers to Taiwan’s Cheng Kung University. The servers were used to train a team of talented students, who went on to take first place in that year’s APAC HPC-AI Competition in Singapore. The parallel computing performance of the servers’ processors, the seamless connectivity between the servers, and the servers’ unrivalled reliability are the reasons why GIGABYTE servers are ideal for educating the next generation of supercomputing experts. GIGABYTE is happy to give back to society and contribute to human advancement through high tech solutions.     For veterans of the industry who are versed in server technologies, there is sometimes a tendency to get bogged down in specs and benchmark figures: these CPUs contain an x number of cores and threads more than the previous generation, this interconnect architecture transfers data at a rate of so-and-so many gigabytes per second, etc. These figures are important, but only because they serve a greater purpose: the advancement of human society through high tech solutions and superb processing power. As crucial as it is to develop the most advanced server products, it is also vital to share these resources with talented individuals, so that the best and brightest among us may have access to tools that match their potential.  Glossary:  《What is Core?》  《What is Thread?》GIGABYTE Technology, a world-renowned provider of server solutions with clients in such diverse sectors as cloud computing, data centers, edge computing, finance, healthcare, manufacturing, and more, is celebrated for its commitment to CSR (corporate social responsibility), ESG (Environmental, Social, and Corporate Governance) objectives, and Sustainable Development Goals (SDGs). Since 2001, the GIGABYTE Education Foundation has hosted the annual “GIGABYTE Great Design” competition, which helps students create their own inventions and compete in international events, such as the iF Design Award. In regards to sustainability, as far back as 2009, GIGABYTE made a commitment to reduce carbon emissions by 50% by 2030. GIGABYTE is on track to meet this goal by 2025—five years ahead of schedule. GIGABYTE has been recognized by Forbes Magazine as one of the “World's Best Employers”. GIGABYTE was also honored as one of Taiwan’s top 25 brands in the “Best Taiwan Global Brands” survey.Learn More:  《Glossary: What is Cloud Computing?》  《Glossary: What is Data Center?》 《Glossary: What is Edge Computing?》 《Press Release: GIGABYTE Honored in “Best Taiwan Global Brands” Survey》What happens when GIGABYTE expresses its commitment to CSR and ESG through its groundbreaking server solutions? 
We present for your perusal: the story of how Taiwan’s Cheng Kung University (NCKU) won the championship in the APAC HPC-AI Competition, with help from GIGABYTE’s G482-Z50 GPU Servers.Learn More:  《Glossary: What is HPC?》  《Glossary: What is AI?》 《More information about GIGABYTE’s GPU Server》     AI, Climate Change, COVID-Related Research: the Focus of APAC HPC-AI Organized by the HPC-AI Advisory Council (HPCAIAC) and National Supercomputing Centre (NSCC) Singapore, and sponsored by such industry heavyweights as Nvidia Corporation, the APAC HPC-AI Competition is an annual contest designed to bridge the gap between the cultivation of talent in universities and research centers, and the application of AI and HPC in the real world. By pitting the boundless potential of young and eager minds against supercomputing tasks with real-life implications, the contest aims to help students prepare for the ever-growing demand for higher computation performance, as well as the increasing complexity of research problems.“High performance computing and artificial Intelligence are the most essential tools fueling the advancement of science,” says Gilad Shainer, Chairman of the HPCAIAC. “[Our] mission is to foster the next generation of supercomputing leadership, and to help develop the next generation of platforms and knowledge.”Cheng Kung University has participated in the APAC HPC-AI Competition since its conception. In 2020, the Department of Engineering Science at NCKU assembled two teams of around six students each to participate in the contest. The students were mentored by Chair Professor Chi-Chuan Hwang; they were also coached by Research Assistant Chao-Chin Li, whom the students called Michael. Under the guidance of Chair Professor Chi-Chuan Hwang (center) and Research Assistant Michael Li (left), and with the help of four GIGABYTE G482-Z50 GPU Servers, the supercomputing team representing Taiwan’s Cheng Kung University took first place in the 2020 APAC HPC-AI Competition.   As part of the contest in 2020, the HPCAIAC and NSCC assigned four challenging tasks. The first task was centered on AI applications: contestants were asked to surpass international NLP (natural language processing) records by using BERT, a machine learning technique developed by Google. The second task was focused on HPC: participants must attempt to break climate simulation world records by using NEMO, a modelling framework for research activities and forecasting services in ocean and climate sciences. The third and fourth tasks were geared towards bio-science simulations and innovations as part of the global effort to combat COVID-19. Students were asked to use NAMD, a parallel molecular dynamics code, to recreate the molecular structure of a common virus in the shortest time possible. They were also tasked with proposing an HPC or AI application that could potentially be used against COVID-19.Learn More:《Glossary: What is Natural Language Processing?》《Glossary: What is Machine Learning?》《Spain’s IFISC Tackles COVID-19 and Climate Change with GIGABYTE Servers》Taking into account GIGABYTE’s dedication to CSR and ESG objectives, and GIGABYTE’s position as an industry-leading server solutions brand, it should come as no surprise that GIGABYTE was glad to support the NCKU’s supercomputing team. Four G482-Z50 GPU Servers were provided by GIGABYTE. 
With these top-tier server products, students were able to practice using the most advanced supercomputing techniques to tackle some of the world’s most critical issues.“GIGABYTE provided us with server products on par with any server brand you could care to name, because they believed in our effort to nurture the next generation of HPC and AI experts,” says Michael Li, who worked closely with the students and developed a fraternal bond with many of them. “Winning awards in an international competition is about more than just the university’s prestige or our students’ academic careers. It is about cultivating supercomputing geniuses who can give back to society and contribute to the betterment of all humankind.”     The Benefits of GIGABYTE Servers: Parallel Computing, Connectivity, Reliability The servers were delivered to the NCKU’s campus in Tainan, and set up in a lab. The supercomputing team connected them to each other to form a computing cluster—an interconnected network of servers utilizing distributed computing technology to deliver performance on the scale of supercomputers. The software tools designated by the HPCAIAC and NSCC were installed on the cluster. Then, under the patient tutorage of Professor Hwang and Coach Li, these plucky university students got to work trying to shatter world records.Learn More:《Glossary: What is Computing Cluster?》《Glossary: What is Distributed Computing?》《Tech Guide: What is Cluster Computing and How Can GIGABYTE Help?》The students spent every moment of free time they had outside of class, including evenings, weekends, and holidays, to practice running the supercomputing programs on the GIGABYTE servers. First, the students would pore over theoretical computer science studies to look for the latest hypothetical methods that may help them achieve a breakthrough. They would then sound out the professor and the coach on their proposal. If they seemed to be on the right track, they would begin tweaking parameters in the software and running tests on the servers to see if their theory panned out. Sometimes they would achieve a quantum leap forward, and Coach Li would take them out for a night on the town to celebrate. Sometimes an experiment would end in abject failure, and it would be back to the drawing board.In the weeks and months leading up to the big day of the competition, the students achieved breakthroughs that would not only propel them past existing records, but also outmatch all the other top universities competing in the contest. Throughout the entire grueling process, the GIGABYTE servers were the students’ steadfast, unwavering companions, always ready to help test the latest configuration of software parameters. Long before the students emerged as world champions in the 2020 APAC HPC-AI Competition, they achieved their greatest triumphs on the four GIGABYTE G482-Z50 servers in the computer lab on the NCKU campus.The supercomputing team summarized that the GIGABYTE servers offered them three key benefits, which ultimately helped the students break world records and take home the championship:1. Incredible parallel computing performance, delivered through a combination of advanced CPUs and GPUs.2. Seamless connectivity between the servers, thanks to communications standards that offer high throughput and very low latency.3. 
Unrivalled reliability thanks to high availability features, which allowed the servers to operate continuously during the months leading up to the contest.Glossary:《What is Parallel Computing?》《What is GPU?》《What is High Availability?》 GIGABYTE helped the NCKU supercomputing team by providing them with four G482-Z50 GPU Servers. The incredible performance delivered by a combination of CPUs and GPGPUs, the seamless connectivity between the servers, and the inbuilt high availability features—all these qualities were of great assistance to the students. Benefit #1: Advanced CPUs and GPGPUs Offering Parallel Processing Capabilities In the quest for world-first scientific breakthroughs, using the right tools for the job can make all the difference. Some computing workloads are better suited for the complex, polymathic capabilities of the central processing unit; others run much more quickly and efficiently on general purpose graphics processing units (GPGPUs). The organizers of the APAC HPC-AI Competition realized this, and so they stipulated that certain problems must be solved with certain types of processing units. The ideal server solution, then, would need to support both advanced CPUs and GPGPU accelerators; it should also be able to run the same task on multiple processors to achieve parallel computing.《Glossary: What is GPGPU?》GIGABYTE servers are designed to do just that. They support the most advanced CPU options, such as Intel® Xeon® Scalable, AMD EPYC™, and the Ampere® Altra® series, which is based on the ARM architecture. In the case of the G482-Z50, there is also support for a highly dense configuration of up to ten PCIe Gen 3.0 GPGPU cards. Each of the server’s dual CPUs can be connected to five GPGPU cards through a PCIe switch, which minimizes the communication latency between the GPGPUs. The supercomputing team at NCKU outfitted their GIGABYTE servers with NVIDIA accelerators. The G482-Z50 is also fully compatible with other options, such as the AMD Instinct™ MI100.Learn More:《GIGABYTE’s Complete List of Intel® Xeon® Scalable Servers》《GIGABYTE’s Complete List of AMD EPYC™ Servers》《GIGABYTE’s Complete List of Ampere® Altra® Servers》《Glossary: What is PCIe?》     Benefit #2: High Throughput, Low Latency Connectivity within the Cluster As is common in HPC applications, the four servers were connected to each other to form a cluster. To link the servers, the NCKU team used 100 Gbps InfiniBand (IB) switches, which are noted for their high throughput and low latency qualities. GIGABYTE servers are also compatible with other networking standard, such as Ethernet, which supports UTP and fiber optic cables.The students used OpenZFS for file storage and Open MPI for communication. Since both of them are open-source, the NCKU team installed the Linux operating system on the GIGABYTE servers. OpenZFS functions like an advanced version of Network-Attached Storage (NAS), in that the files are distributed among all the servers in the cluster, rather than on a single server. This drastically improves the read/write speeds when storing or retrieving data. Open MPI was chosen because it allowed the students to access the entire cluster through any one of the four servers; it automatically distributed tasks to available computing resources to achieve optimal performance. The G482-Z50 is compatible with Red Hat Enterprise Linux, SUSE Linux Enterprise Server, and a wide range of other operating systems.Learn More:《Glossary: What is NAS?》《Setting the Record Straight: What is HPC? 
A Tech Guide by GIGABYTE》     Benefit #3: Continuous, Reliable Operations Thanks to High Availability Features Throughout the many months before the competition, the team worked tirelessly to crack the supercomputing puzzles. They ran batches of tests on the GIGABYTE servers, working days, nights, weekdays, weekends—which meant the servers got no rest at all. Thankfully, this did not constitute a problem, because GIGABYTE’s server solutions come with a treasure trove of high availability (HA) features.First and foremost is Smart Crises Management and Protection (SCMP), a GIGABYTE-patented feature that is deployed in servers without a fully redundant power supply unit (PSU) design. In the event of a faulty PSU or overheating, SCMP forces the CPU to enter ultra-low power mode. This reduces the power load, which protects the system from unexpected shutdowns, while avoiding component damage or data loss.Smart Ride Through (SmaRT) is another feature that is implemented on all GIGABYTE servers. In the event of a power outage, the system will manage its power consumption (known as throttling) while reducing the power load. Capacitors within the PSU can provide power for ten to twenty milliseconds, which is enough time for a backup power source to come online.Last but not least, the dual ROM architecture guarantees that, in the unfortunate event that the ROM storing the BIOS and BMC fails to boot, the system will reboot with the backup BIOS and/or BMC. Once the primary BMC is updated, the ROM of the backup version will automatically synchronize. The BIOS can be updated based on the firmware version.     The Result: Shattering Records and Claiming the Crown in the Contest On the big day of the competition—October 15th, 2020—everything that the students learned using the GIGABYTE servers was put to the test. In the end, hard work paid off. NCKU came in first, beating other prestigious research universities in the Asia-Pacific region, such as the University of New South Wales (UNSW) in Australia, Nanjing University (NJU) in China, and Singapore’s own Nanyang Technological University (NTU). Some of the NCKU team’s results even shattered world records at the time. For example, in the first part of the competition, which asked contestants to break NLP records using BERT, NCKU achieved an accuracy of 87.7%. This was higher than what had been achieved by the University of California, San Diego (87.2%) and Stanford University (87.16%).“Our collective futures greatly depend on nurturing every student’s potential, especially in times of adversity such as these. The commitment and resilience demonstrated by this year’s competing teams reminds us all that we need to meet challenges head-on and be flexible in adapting to the new normal,” says Associate Professor Tan Tin Wee, Chief Executive at NSCC.NCKU would go on to win big again in the 2021 APAC HPC-AI Competition. Professor Hwang says, “We would like to thank the College of Engineering and the Department of Engineering Science at NCKU, as well as GIGABYTE for their support.”GIGABYTE Technology is glad to have provided the servers that helped to educate the next generation of HPC and AI experts in Taiwan. These world-class servers have not only proven capable of tackling problems related to some of the 21st century’s most pressing issues—namely artificial intelligence, climate change, and COVID-19—they have also shown that as long as we are willing to invest in the younger generation, humanity has a chance to triumph against adversity. 
This is in keeping with GIGABYTE Technology’s unwavering commitment to its CSR and ESG goals, as well as the GIGABYTE motto: “Upgrade Your Life”, which is a sincere belief that high tech solutions can help us build a better world.
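The cluster described above used Open MPI to spread work across the four G482-Z50 servers. The snippet below is a generic mpi4py illustration of that pattern, splitting a task across MPI ranks and gathering the results on rank 0. It is not the NCKU team's actual competition code, and the summing workload is just a placeholder.

```python
# Generic MPI pattern: each rank works on its share of a problem, then the
# results are gathered on rank 0. Placeholder workload, not competition code.
# Run with Open MPI, e.g.: mpirun -np 8 python mpi_demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()       # this process's ID within the cluster
size = comm.Get_size()       # total number of processes across all nodes

# Split a toy workload (summing a range of integers) across the ranks.
N = 10_000_000
chunk = range(rank * N // size, (rank + 1) * N // size)
partial_sum = sum(chunk)

# Collect every rank's partial result on rank 0.
total = comm.reduce(partial_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"Sum of 0..{N - 1} computed by {size} ranks: {total}")
```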


[GIGABYTE] Semiconductor Giant Selects GIGABYTE’s Two-Phase Immersion Cooling Solution

2023.06.15


GIGABYTE Technology has built a “two-phase immersion cooling solution” for one of Taiwan’s foremost semiconductor giants, to be used in its sustainable, future-proof “green HPC data centers”. Not only does the solution boost the performance of HPC processors by over 10%—which is crucial for the nanometer process technology used in IC foundries—it also reduces data center power consumption by 30% and lowers PUE below 1.08, which turns it into a role model of green computing that may be replicated in data centers around the world. This exemplary project demonstrates how GIGABYTE can support its clients’ CSR, ESG, and SDGs-related initiatives, and how GIGABYTE is working tirelessly to “Upgrade Your Life” with high tech while protecting our environment.     One of the most important changes to sweep over the world is the international community’s determination to combat climate change and achieve carbon neutrality. This is proven by the 2015 Paris Agreement and the Glasgow Climate Pact in 2021.  The new epoch of environmental consciousness is having a profound effect on the way companies do business: from the way they invest in new technology, to the way they develop products and services, to how they approach the market. Failure to demonstrate the company’s commitment to CSR (corporate social responsibility), ESG (environmental, social, and corporate governance), and SDGs (Sustainable Development Goals) will not go unnoticed by consumers and investors—and even less so by government regulators and lawmakers, who are beholden to the public.  As one analyst has so succinctly put it: “Every enterprise is on the pathway to carbon neutrality, whether they have decided this for themselves or not.”One exemplary industry leader that has demonstrated remarkable initiative in achieving carbon neutrality is the prominent IC foundry giant that leads the world in advanced process technology. This titan of the industry utilizes high performance computing (HPC) to manufacture semiconductor chips on the nanometer scale with a high degree of precision and excellent yield. At the same time, the IC foundry giant adheres to the ESG vision of uplifting our society through the reduction of carbon emissions, the fulfillment of its climate pledge, and the achievement of “net zero”. Therefore, the industry leader has devoted considerable resources to finding a way to expand its data centers’ HPC capabilities in a sustainable and eco-friendly way.Glossary:《What is High Performance Computing (HPC)?》《What is Data Center?》In October of 2020, the IC foundry giant reached out to GIGABYTE Technology and other industry leaders to build a “two-phase immersion cooling data center”. This revolutionary new solution will incorporate a type of data center cooling technology known as “two-phase immersion cooling” to improve chip computing performance and reduce power consumption. 
Not only will this be beneficial to the state-of-the-art semiconductor manufacturing process, it will also help the IC foundry giant usher in a new era of “green HPC data centers”, with the end goal of reducing power consumption by 400 million kWh per year starting from 2030.
Learn More:《Press Release: Building a Two-phase Immersion Cooling Solution for IC Foundry Giant, GIGABYTE Is Looking to Set New Standards for Modern Data Center》《Learn All about Immersion Cooling on GIGABYTE’s Dedicated Solutions Page》
Benefit #1: Eco-Friendly Solution Reduces Power Consumption by Up to 30% and Lowers PUE Below 1.08
In the IC foundry giant’s semiconductor fabrication plant (commonly called a fab or foundry) in Hsinchu, Taiwan, GIGABYTE Technology worked with other industry leaders—including the power and thermal management solutions provider Delta Electronics, the networking and communications expert Accton Technology, and the American conglomerate 3M—to build the “two-phase immersion cooling solution”. The two-phase immersion cooling solution relies on the natural evaporation and condensation of its dielectric coolant, which has a low boiling point of 50°C, to transfer heat from the electronic components to the condenser at the top of the tank. This is a superbly energy-efficient system that reduces power consumption and lowers PUE.
From the outside, the solution looks like a large white metal cabinet, measuring 1.8 meters wide, 1.1 meters deep, and 2.5 meters high. Two gloveboxes occupy the upper half of the cabinet. The lower half is where the magic happens: 96 custom-made server motherboards provided by GIGABYTE are installed in a sealed tank partially filled with dielectric, nonconductive liquid coolant. The coolant bath chills the electronic components directly, which is a remarkable advancement compared to conventional air cooling or direct-to-chip (D2C) liquid cooling.
Learn More:《Take a Closer Look at GIGABYTE’s Two-Phase Immersion Cooling Solution》《More information about GIGABYTE’s Server Motherboards》《Glossary: What is Liquid Cooling?》
Two-phase immersion cooling differs from single-phase immersion cooling in one distinctive way. In the single-phase system, warm coolant is cycled out to a coolant distribution unit (CDU) to remove the heat via a cooling tower, a dry cooler, or a chiller. The two-phase system achieves the same effect through a recurring cycle of vaporization and condensation within the same sealed tank. Because the coolant that is used has an ultra-low boiling point of just 50°C, heat from the servers will naturally cause the coolant to evaporate and transfer heat out of the bath. The vapors are then chilled by a lid or coil condenser at the top of the tank, allowing the condensate to flow back into the coolant bath to restart the cycle.
By adopting two-phase immersion cooling in its green HPC data center, the IC foundry giant is able to reduce the data center’s total power consumption by up to 30% and lower PUE (Power Usage Effectiveness) from 1.35 to below 1.08. Since a lower PUE denotes greater energy efficiency, this is a major stride forward. Overall, the two-phase immersion cooling solution has a high cooling capacity of 100 kW, which is recommended for the powerful processors used in HPC.
《Glossary: What is PUE?》
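PUE is simply the ratio of total facility power to the power drawn by the IT equipment itself, so the figures quoted above can be sanity-checked with a few lines of arithmetic. The snippet below is illustrative only: the 1,000 kW IT load is an assumed example value, not a figure disclosed for this data center, and the PUE drop alone accounts for roughly a 20% reduction in facility draw, with the remaining portion of the quoted 30% presumably coming from other efficiency measures.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power."""
    return total_facility_kw / it_load_kw

# Assume an illustrative 1,000 kW IT load (not a disclosed figure).
it_load_kw = 1_000.0

air_cooled_total = it_load_kw * 1.35   # PUE 1.35 before the upgrade
immersion_total  = it_load_kw * 1.08   # PUE below 1.08 after the upgrade

savings_kw = air_cooled_total - immersion_total
print(f"Air-cooled facility draw : {air_cooled_total:.0f} kW (PUE {pue(air_cooled_total, it_load_kw):.2f})")
print(f"Immersion facility draw  : {immersion_total:.0f} kW (PUE {pue(immersion_total, it_load_kw):.2f})")
print(f"Overhead savings         : {savings_kw:.0f} kW (~{savings_kw / air_cooled_total:.0%} of the original total)")
```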
The IC foundry giant has achieved a breakthrough in more ways than one. On the one hand, adopting more energy-efficient cooling solutions will go a long way toward future-proofing the semiconductor manufacturing process and making it less susceptible to power outages or brownouts. On the other hand, green computing has become increasingly important as the world gets hotter and carbon emissions become more heavily taxed. Building eco-friendly green HPC data centers with smaller carbon footprints may help the IC foundry giant fulfill SDGs number 7 (“ensure access to affordable, reliable, sustainable and modern energy for all”) and number 9 (“build resilient infrastructure, promote inclusive and sustainable industrialization and foster innovation”), as outlined by the United Nations.
According to a statement released by the IC foundry giant, it is working proactively to make the two-phase immersion cooling solution an industry standard, so that it may be promoted to and utilized by other companies in the supply chain to help achieve greater sustainability in the future.
Benefit #2: Highly Dense Configuration of HPC Processors Gets 10% Boost in Computing Performance
The second major advantage of using two-phase immersion cooling is how it enables processors to perform at peak capacity. The reason is as follows: in a standard air-cooled data center or server room, the set point—which is the room’s baseline temperature—is kept at around 30°C to 35°C, and CPU performance is “throttled” at about 80% to keep the chips from overheating. Because immersion cooling removes heat much more quickly, the ambient temperature can be allowed to rise while the CPUs run uncapped, improving chip computing performance by 10% or more. It goes without saying that getting better performance out of the green HPC data center is hugely beneficial for the IC foundry giant.
《Glossary: What is Server Room?》
The 96 GIGABYTE server motherboards installed in the two-phase immersion cooling solution run on AMD EPYC™ processors. Because immersion cooling is more effective at removing heat, the CPUs can operate at maximum capacity, resulting in a performance boost of over 10%.
The 96 GIGABYTE server motherboards installed in the two-phase immersion cooling solution run on AMD EPYC™ processors, which offer impressive core and thread counts that are ideally suited for HPC-related tasks. Such a large number of nodes, even if they were installed in GIGABYTE’s industry-leading H-Series High Density Servers (such as the H262-Z6A, which packs four nodes in a 2U form factor), would take up at least 48 rack units of space—and that is without considering the networking hardware that supports the computing and storage nodes in a data center. Not only does two-phase immersion cooling make a highly dense configuration of server components possible, it also removes unnecessary air-cooling components, such as fans, heatsinks, and heat shrouds, from the server design. This helps to free up more space, reduce the chances of equipment failure, and eliminate the temperature variations and component degradation that may result from the vibration of moving parts.
Learn More:《More information about GIGABYTE’s High Density Server》《Glossary: What is Core?》《Glossary: What is Thread?》《Glossary: What is Node?》《Glossary: What is Rack Unit (RU or U)?》
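The space and performance claims above follow from simple arithmetic: 96 motherboards at four nodes per 2U server works out to 48U of rack space, and lifting an 80% throttle cap implies at most a 25% theoretical gain per chip, under which the reported figure of 10% or more sits comfortably. The short calculation below merely restates that arithmetic; it is illustrative, not data from the deployment.

```python
# Rack-space arithmetic for 96 nodes in conventional high-density servers.
nodes = 96
nodes_per_2u_server = 4            # e.g., a 2U four-node server such as the H262-Z6A
servers_needed = nodes / nodes_per_2u_server
rack_units = servers_needed * 2    # each server occupies 2U
print(f"{nodes} nodes -> {servers_needed:.0f} servers -> {rack_units:.0f}U of rack space")

# Throttling arithmetic: removing an ~80% performance cap gives at most a
# 100/80 - 1 = 25% theoretical gain; the reported real-world gain is >10%.
throttled_fraction = 0.80
theoretical_ceiling = 1 / throttled_fraction - 1
print(f"Theoretical ceiling from removing the cap: {theoretical_ceiling:.0%}")
```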
In the first quarter of 2022, the IC foundry giant officially unveiled its two-phase immersion cooling solution to the public. The GIGABYTE team was there every step of the way to ensure the industry leader’s groundbreaking vision came to fruition. The IC foundry giant has commended GIGABYTE for its industry-leading research and development capabilities, its fast response to customer queries, and its ability to effectively put plans into action.
The expert knowledge gained by GIGABYTE through this process will be beneficial for future applications in other industries, such as artificial intelligence, cloud computing, edge computing, FinTech, and machine learning—basically, any sector that must contend with the thermal management dilemma posed by increasingly powerful CPUs and GPUs that generate more and more heat.
Glossary:《What is 5G?》《What is Artificial Intelligence?》《What is Cloud Computing?》《What is Edge Computing?》《What is Machine Learning?》《What is GPU?》
GIGABYTE Technology is glad to contribute to the semiconductor industry and protect our environment by designing and building the eco-friendly and future-proof HPC data center that incorporates two-phase immersion cooling. It is hoped that, since Taiwan is an exemplary tech hub and the world leader in semiconductor production, the solution selected by the IC foundry giant may become the role model that the global supply chain emulates. The continuous effort to help enterprises make headway toward carbon neutrality, as well as fulfill their CSR, ESG, and SDGs-related commitments, is in keeping with GIGABYTE’s corporate vision: “Upgrade Your Life” with innovative high tech solutions that deliver better business results while building a greener, more sustainable world of tomorrow.

View details

[GIGABYTE] NCHC and Xanthus Elevate Taiwanese Animation on the World Stage with GIGABYTE Servers

2023.06.15

[GIGABYTE] NCHC and Xanthus Elevate Taiwanese Animation on the World Stage with GIGABYTE Servers

Created by Greener Grass Production, the Taiwanese sci-fi mini-series “2049” made its debut on Netflix and various local TV channels. The animated spin-off “2049+ Voice of Rebirth”, crafted by Xanthus Animation Studio, premiered on the streaming service myVideo. The CGI show was created with the NCHC Render Farm’s GIGABYTE servers, which employ top-of-the-line NVIDIA® graphics cards to empower artists with industry-leading rendering capabilities. The servers can take on multiple workloads simultaneously through parallel computing, and they boast a wide range of patented smart features that ensure stability and availability. With all it has going for it, “2049+ Voice of Rebirth” may garner enough attention to become the breakout hit that will introduce Taiwanese animation to international audiences.
A futuristic metropolis shrouded in a green haze, viewed as if through a glass darkly, where larger-than-life holograms out of a Philip K. Dick novel loom above the cityscape. A yawning chasm between ultramodern skyscrapers and the imposing figures of ancient temple guardians, through which the vortex of everyday life ebbs and flows, reminiscent of the style of Japanese director Mamoru Oshii. All this rendered in vibrant, cel-shaded CGI, crafted with the polish and sophistication of a theatrical release made by a world-class animation studio.
These are scenes from the computer-animated feature “2049+ Voice of Rebirth”, produced by the Taiwanese company Xanthus Animation Studio. Founded in 2004 by a team of kindred spirits, Xanthus is the creative force behind the popular Taiwanese children’s animated series “Yameme”, as well as a frequent collaborator on global franchises such as “Final Fantasy” and “Monster Strike”. For many years, Xanthus worked with Taiwan’s award-winning Greener Grass Production to launch an animated spin-off of the latter’s science fiction mini-series “2049”, which debuted on Netflix and other platforms in October 2021. “2049+ Voice of Rebirth” featured original music provided by Taiwan’s famous record label HIM International Music Inc., and it premiered on myVideo, a streaming service owned by the telecom giant Taiwan Mobile. The story focused on Taiwan in the not-so-distant future of 2049, and on how artificial intelligence would affect pressing social issues such as affordable housing.
《Glossary: What is Artificial Intelligence?》
The pairing of a star-studded live-action TV show with an animated companion piece is a bold new experiment for Xanthus. With any luck, it may be the breakout hit that introduces Taiwanese animation to the international market. “Taiwanese animation has the talent and technology to make a splash on the world stage,” says Bruce Yao, CEO of Xanthus Animation. “Our animation techniques surpass Southeast Asian or even European competitors. Taiwanese animators are often headhunted by Chinese digital media companies. Now, with streaming platforms like Netflix, HBO GO, and Disney Plus, there’s a golden opportunity for Taiwanese animated shows to attract a wider audience overseas.”
The fact of the matter is, animation has a long and lauded history in Taiwan. Starting in the 1970s, the Taiwanese animation company Wang Film Productions (also known as Hong Guang Animation) did outsourcing work for big American conglomerates like Hanna-Barbera, Warner Bros., and Walt Disney, collaborating on major projects such as “Tarzan” and “The Lion King”.
Another animation studio, CGCG Inc., was famous for its work on the “Star Wars” and “How to Train Your Dragon” franchises; purportedly, George Lucas himself came to Taiwan to visit CGCG. More recently, Moonshine Animation gained critical acclaim for its use of GPU servers and artificial intelligence to render authentic, life-like animation.
Learn More:《More information about GIGABYTE's GPU Server》《Moonshine Animation: Applying Cutting-Edge AI & VDI Technologies with GIGABYTE Servers》
Yet, the industry as a whole has had its ups and downs. Taiwanese animation has generally struggled to transition from doing outsourcing work to creating its own intellectual properties. One commonly cited reason is the limited size of the domestic market. Another hurdle is the lack of capital to finance blockbusters. Last but not least, Taiwan faces stiff competition in the international arena, where American franchises dominate the theaters, while animated shows are the forte of Japanese studios. How to emerge on the world stage despite these limitations—similar to how Korea has succeeded with “K-pop” and “K-dramas”—is a question for all aspiring Taiwanese animators.
The industry veterans at Xanthus think they may have a solution. “No one had even heard of Pixar before it revolutionized the industry with its first computer-animated film, ‘Toy Story’, in 1995. Exciting new technology, coupled with easy access to the market via streaming services, may be what Taiwan needs to make a comeback,” says Marketing & Planning Director Mason Yao, who is Bruce’s younger brother.
To infuse its magnum opus with the creative spark that may capture the imagination of a wider audience, Xanthus called upon the services of Taiwan’s National Center for High-performance Computing (NCHC), as well as its crowning jewel—the Render Farm powered by GIGABYTE’s server solutions.
Learn More:《NCHC Builds Render Farm with GIGABYTE Servers: Blending Art and Imagination with High Performance Computing》《Glossary: What is Render Farm?》《Glossary: What is HPC?》
NCHC Provides Tech, Nurtures Talent to Enhance Taiwanese Animation
As part of the National Applied Research Laboratories (NARLabs), the NCHC built the Render Farm to provide high performance computing (HPC) and cloud computing services for Taiwan’s content creators, such as animators and filmmakers. It hosts the annual “HPC NCHC Animation Challenge” to encourage the younger generation to enter the animation industry. In this way, the NCHC hopes to help Taiwanese animation maintain its edge in terms of technology and talent, while reducing the barriers to entry by offering a rendering platform that’s accessible to professionals and amateurs alike.
Learn More:《GIGABYTE Reaffirms Its Commitment to Tomorrow’s Talent at 9th HPC NCHC Animation Challenge》《Glossary: What is Cloud Computing?》
“2049+ Voice of Rebirth”, the animated magnum opus by Xanthus Animation Studio, hopes to wow viewers with its sophisticated CGI, which was created with the help of the NCHC Render Farm. Streaming platforms like Netflix may be the springboard that will catapult Taiwanese animation to international success.
By tapping into the Render Farm’s incredible processing power, Xanthus has crafted an animated show filled with dazzling imagery and poignant artistry, worthy of being the companion piece to the acclaimed “2049” mini-series.
“2049+ Voice of Rebirth” evokes the style and ethos of the cyberpunk genre, with design choices and story beats paying homage to classics such as “Blade Runner” and “Ghost in the Shell”. At the same time, it recounts a distinctively Taiwanese narrative, touching upon the themes of folk religion and discussing pervasive societal issues, such as urban redevelopment and grassroots activism. The result is a refreshing take on the near-future sci-fi genre that manages to be at times bewildering and fantastical, while remaining firmly rooted in realism and radiating genuine empathy for its flawed and complex human characters.
Artistically, Xanthus was able to walk a fine line between fact and fiction by utilizing scenes and settings from the real world as much as possible. To recreate the area of Nanjichang in central Taipei, where the story takes place, the Xanthus team used point cloud technology to scan and render the century-old community. The character models were intentionally converted from 3D to 2D, and then lit with normal mapping to achieve an authentic cyberpunk “feel”. The textures are a substantial upgrade over Xanthus’s more child-oriented programs, designed to appeal to an older audience. As a result, “2049+ Voice of Rebirth” has the production value of a world-class theatrical release.
《Glossary: What is Point Cloud?》
A lot of this was made possible by the NCHC Render Farm and its array of GIGABYTE servers. “Our production capacity was effectively doubled by using the Render Farm; even big, flashy sequences could be animated in a relatively short time. Not to mention the quality and reliability—we have rented services from other render farms before, and have even tried building our own. Needless to say, nothing compares to what the NCHC has to offer us,” says Bruce Yao.
Xanthus utilized “point cloud” technology to recreate the historical area of Nanjichang in central Taipei. They imagined a not-so-distant future where AI applications may intersect with social issues such as affordable housing and urban renewal. Will we use technology to create a better tomorrow, or will we languish in the dystopian future often envisioned in the cyberpunk genre?
The Render Farm’s Secret Weapon: Server Solutions by GIGABYTE
Owing to its cluster of GIGABYTE G-Series GPU Servers, the NCHC Render Farm is able to give content creators access to cutting-edge animation techniques, while remaining cost-efficient, reliable, and available. Specifically, the NCHC chose the G191-H44, which features four GPU card slots in a slim 1U chassis, and the G481-S80, which can fit up to eight NVIDIA® Tesla® V100 SXM2 modules in a 4U chassis. Both models run on 2nd Generation Intel® Xeon® Scalable processors.
Learn More:《Glossary: What is Computing Cluster?》《More information about GIGABYTE's Intel Xeon Scalable Servers》
Over the years, the NCHC Render Farm has offered technological support to Taiwan’s content creators, culminating in the premiere of over five hundred animated or live-action films. Case in point: the mini-series “Seqalu: Formosa 1867”, an epic historical drama conceived by Taiwan’s Public Television Service (PTS), features special effects created by the Render Farm. Ancient ships, storms on the open sea, and other cinematic elements were rendered to great effect. Another recent collaboration with Xanthus is “Yameme Team vs. Math Demon”, Taiwan’s first educational 3D animated feature about arithmetic, which caters to tens of thousands of parents and children.
The industry-leading processing power of the GIGABYTE servers has no trouble keeping up with the ever-evolving demands of animators and filmmakers.
For “2049+ Voice of Rebirth”, the G191-H44 and G481-S80 offered two key perks that helped the artists knock it out of the park:
1. The combination of CPU and GPU resources through heterogeneous computing to enable the latest animation techniques, such as motion capture, point cloud, and virtual reality. Parallel computing and low-latency performance allow multiple users to access the Render Farm’s services simultaneously.
2. Patented smart features related to power and cooling that ensure system stability, which in turn guarantees the Render Farm’s services are always available to its users.
Glossary:《What is GPU?》《What is Heterogeneous Computing?》《What is Parallel Computing?》
GIGABYTE’s G481-S80 can house up to eight NVIDIA® V100 GPUs. GPU performance is enhanced by the NVIDIA® NVLink™ interconnect architecture, which improves the bandwidth and scalability of a server with multiple graphics cards. The result is maximized throughput and optimized GPU-to-GPU acceleration.
NVIDIA® GPUs and Parallel Computing Enable Creative Rendering Methods
The G191-H44 and G481-S80 are powered by Intel® Xeon® Scalable processors; they also feature additional slots for GPU cards, which are ideal for rendering CGI. The servers employ NVIDIA® V100 and NVIDIA® Quadro® RTX 8000 GPUs—some of the best options on the market. To use the G481-S80 as an example, rendering can be done forty times faster when all eight of its sockets are occupied by NVIDIA® V100 GPUs. Avant-garde animation techniques, such as motion capture, point cloud, and virtual reality, can be utilized with this setup. Because of this, when independent studios like Xanthus connect to the GIGABYTE servers at the NCHC Render Farm, they experience an efficiency boost like never before.
What’s more, the performance of these servers is greater than the sum of their parts. The G481-S80 is equipped with NVIDIA® NVLink™, an interconnect architecture designed to maximize throughput and optimize GPU-to-GPU acceleration in a system with multiple graphics cards. NVLink™ can increase the scalability, bandwidth, and number of links between graphics cards in the SXM2 form factor. To use the NVIDIA® Volta™ series as an example, with NVIDIA® NVLink™ a single GPU can support up to six links, each transmitting data at 25 gigabytes per second in each direction. That adds up to a total bandwidth of 300 gigabytes per second—ten times the average bandwidth of PCIe 3.0. Such high-density parallel computing capabilities and low-latency performance make these servers the ideal HPC solutions for servicing multiple users simultaneously.
Glossary:《What is Scalability?》《What is PCIe?》
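The NVLink™ bandwidth figure quoted above follows directly from the per-link numbers. The short calculation below restates that arithmetic using the Volta-generation link count and speed cited in the article; it is illustrative only.

```python
# NVLink (Volta generation) aggregate bandwidth, as cited above.
links_per_gpu = 6
gb_per_sec_per_direction = 25
bidirectional_per_link = gb_per_sec_per_direction * 2       # 50 GB/s per link

total_bandwidth = links_per_gpu * bidirectional_per_link     # 300 GB/s per GPU
pcie3_x16_bidirectional = 32                                 # ~32 GB/s for a PCIe 3.0 x16 slot

print(f"NVLink total per GPU : {total_bandwidth} GB/s")
print(f"PCIe 3.0 x16 (approx): {pcie3_x16_bidirectional} GB/s")
print(f"Ratio                : ~{total_bandwidth / pcie3_x16_bidirectional:.1f}x (commonly rounded to ~10x)")
```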
It’s worth noting that GIGABYTE’s G-Series GPU Servers are incredibly versatile and can be used in other sectors besides animation. To use the G191-H44 as an example, its industry-leading dense configuration of four GPUs in a 1U chassis is ideal for workloads related to AI, machine learning, and deep learning, as well as technical computing applications in the fields of financial services, life sciences, and energy exploration. It has been validated for use with NVIDIA® NGC™ and can fully support cloud software provided by NVIDIA, making it ideal for data scientists and researchers who wish to create a multi-tenant AI algorithm training environment. The G191-H44 has also been officially recognized as an NVIDIA® EGX™ Platform; these are GPU-accelerated platforms designed for edge computing, which can be deployed as part of a 5G cellular base station or micro data center (MDC), providing real-time inferencing and data processing on the edge of the network.
Learn More:《Glossary: What is Machine Learning?》《Glossary: What is Deep Learning?》《Glossary: What is Edge Computing?》《GIGABYTE’s GPU Servers Help Improve Oil & Gas Exploration Efficiency》
With four GPU card slots in a 1U chassis, the GIGABYTE G191-H44 boasts industry-leading GPU density that makes it ideal for workloads related to artificial intelligence, machine learning, and deep learning, as well as HPC-related applications in a render farm used by animators and filmmakers.
Patented Smart Features Ensure Reliability and Availability for Users
The team at Xanthus has observed that although the Render Farm doubled their production capacity, the servers are clearly capable of handling much larger workloads. “I would not be surprised if the Render Farm could provide every content creator with just about as much capacity as they could ask for,” muses Bruce Yao.
The fact of the matter is, the GIGABYTE servers had been put through their paces for precisely this purpose. During the validation phase, the Render Farm conducted a battery of rigorous stress tests on the servers. A 4K ultra-high-definition video was played at the NCHC’s headquarters in Hsinchu, and then a server located in the Taichung branch was connected to begin processing the video. The video was then accessed and broadcast on up to a hundred client devices, simulating a scenario in which the Render Farm would be pushed to its maximum capacity. The GIGABYTE servers passed with flying colors. They are fully capable of tackling many massive workloads at the same time, making it a breeze for different animation studios to use the Render Farm to complete their projects.
The G191-H44 and G481-S80 have ways of ensuring smooth and reliable operations for users connected to the Render Farm. The secret lies in the servers’ array of patented smart features, which can take care of everything from temperature control to power efficiency.
● Automatic Fan Speed Control
To achieve an optimal balance between cooling and power efficiency, Automatic Fan Speed Control is enabled in the GIGABYTE servers. This means the speed of individual fans will adjust automatically according to feedback from temperature sensors monitoring key components, such as the CPU, GPU, memory (DIMM), and storage; a simplified sketch of this kind of control loop follows after this list.
《Glossary: What is DIMM?》
● Cold Redundancy
To take advantage of the fact that a PSU with a higher load will run at greater power efficiency, GIGABYTE servers with N+1 redundancy are equipped with a power management function known as Cold Redundancy. When the total system load falls below 40%, the system will automatically place one PSU in standby mode, resulting in a 10% improvement in power efficiency.
● Smart Crises Management and Protection (SCMP)
SCMP is a patented feature included in GIGABYTE servers without a fully redundant power supply unit (PSU) design. In the event of a faulty PSU or overheating, the SCMP function will force the CPU to go into an ultra-low power mode to reduce the power load, protecting the system from unexpected shutdowns and avoiding component damage or data loss.
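As a rough illustration of the fan-control idea referenced in the list above, the sketch below maps sensor temperatures to fan duty cycles with a simple proportional rule. The thresholds and the linear mapping are invented for illustration; they are not GIGABYTE's actual firmware algorithm.

```python
def fan_duty_cycle(temp_c: float, idle_temp: float = 40.0, max_temp: float = 85.0) -> int:
    """Map a component temperature to a fan duty cycle (20-100%).

    Purely illustrative proportional rule; real server firmware uses
    per-sensor curves tuned by the vendor.
    """
    if temp_c <= idle_temp:
        return 20                                  # quiet baseline
    if temp_c >= max_temp:
        return 100                                 # full speed
    span = (temp_c - idle_temp) / (max_temp - idle_temp)
    return int(20 + span * 80)

# Each fan follows the hottest sensor in its cooling zone (example readings).
sensors = {"cpu": 62.0, "gpu": 74.0, "dimm": 48.0}
print(f"Duty cycle: {fan_duty_cycle(max(sensors.values()))}%")
```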
GIGABYTE’s G-Series GPU Servers help animators merge creativity with technology so they can bring fantastical visions to life. Powerful CPU and GPU resources can render epic CGI sequences in a jiffy, while parallel computing capabilities ensure multiple users can access the rendering services simultaneously.
Working with the NCHC, GIGABYTE Upgrades Taiwan’s Animation Industry
“A project like the Render Farm was made possible by the public sector, so it should be universal enough for everyone to use and enjoy,” says Dr. Chia-Chen Kuo, project leader at the NCHC Render Farm. She believes that since technology is also part of Taiwan’s culture, working with leading tech companies like GIGABYTE will go a long way towards improving Taiwan’s cultural exports. By infusing CGI shows with animation techniques that are made possible through high performance computing, the NCHC can help content creators share Taiwan's unique aesthetics and stories with the rest of the world.
The rendering capabilities of the G191-H44 and G481-S80 make cutting-edge animation techniques accessible to content creators, while the servers’ patented smart features ensure services are available and reliable. These benefits are in keeping with Dr. Kuo’s original vision, and they make it possible for local companies like Xanthus Animation Studio to enjoy a dramatic entry on the world stage. GIGABYTE’s motto is “Upgrade Your Life”. It is a sincere belief that brilliant new inventions will fundamentally change the way we live, work, and play for the better. The same holds true for Taiwan’s animation industry, which may return to—or even surpass—its glory days with server solutions by GIGABYTE.

View details

[GIGABYTE] Using GIGABYTE, NIPA Cloud Soars Among CSP Giants in Thailand

2023.06.15

[GIGABYTE] Using GIGABYTE, NIPA Cloud Soars Among CSP Giants in Thailand

Nipa Technology Co., stylized as NIPA Cloud, is a leading cloud service provider (CSP) in Thailand. Founded in 1996 by Dr. Abhisak Chulya, NIPA Cloud offers public and private cloud services based on the open-source software platform OpenStack. NIPA Cloud’s vision is to provide world-class service to local businesses at an affordable price. Already, it has had tremendous success: Krung Thai Bank (KTBCS), Thailand’s largest state-owned bank, operates three private cloud clusters built by NIPA Cloud.
Glossary:《What is Cloud Computing?》《What is OpenStack?》
Thailand’s cloud market is a proverbial gold mine. The research firm Gartner estimated that cloud spending in Thailand surpassed US$1 billion in 2022, while IDC estimates that cloud spending still makes up just 5% of Thailand’s total IT spending. The seemingly contradictory numbers belie enormous opportunity: a lot of Thai companies are migrating to the cloud for the first time, while many others are on the fence. The billion-dollar question is this: what will it take to convince them?
Drawing on its years of experience, NIPA Cloud has identified three indicators that Thai companies evaluate when they consider cloud migration:
1. Cost-efficiency: Switching to the cloud should provide more resources at a better cost.
2. Availability: Services should always be available and reliable; after all, no one wants to see operations grind to a halt because of server issues.
3. Security: It must be safe and legal to store sensitive data on the cloud. Tech support should be standing by to provide assistance if there is trouble.
Some of the world’s biggest CSP brands, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure, are eyeing the growing Thai market. To maintain its position, NIPA Cloud launched “NIPA Enterprise Public Cloud” in September of 2021. This was a new cloud computing service for enterprise customers. It offered benefits beyond the scope of NIPA Cloud's Public Cloud service, such as multisite infrastructure support, improved security, fabric networking, unlimited bandwidth, and faster data transfer on 5G or IoT networks. The new service was made possible with server solutions from GIGABYTE.
Glossary:《What is 5G?》《What is IoT?》
Competing with Titans in the Billion-Dollar CSP Market
“Cloud is truly the foundation of digital transformation, and we believe we will be a cloud pioneer in the corporate world,” says Dr. Abhisak.
NIPA Enterprise Public Cloud addressed the issue of security by adding a Network Access Control List within the instance, which provided greater control and flexibility. NIPA Cloud also offered 24/7 tech support, which can provide consultation about compliance with the Personal Data Protection Act (PDPA), should the client have questions.
However, the issues of cost-efficiency and availability represented more of a hurdle. Somehow, NIPA Cloud had to offer more computational resources to its customers while making everything cost less. What’s more, it needed to guarantee the availability of its services. These challenges are common problems faced by the world’s leading CSPs. Now, NIPA Cloud has joined their ranks.
The solution was to pair the new cloud service with GIGABYTE servers. Three models from the R-Series of Rack Servers were selected: the R282-Z90, a 2U 12-bay server; the R182-Z90, a 1U 4-bay server; and the R182-Z92, a 1U 10-bay NVMe server. All three models run on dual AMD EPYC™ 7002 series processors.
They offer impeccable performance, availability, and power efficiency—just what NIPA Cloud needed to satisfy its customers.
Learn More:《Glossary: What is NVMe?》《More information about GIGABYTE's Rack Server》《The EPYC Rise of AMD’s New Server Processor》
Using these servers, NIPA Cloud built a new computing cluster and assigned a role to each model. The R282-Z90 is the controller node, used by administrators to manage the entire cluster. The R182-Z90 is the compute node, which processes data. The R182-Z92 is the storage node, which stores data. Both the R282-Z90 and R182-Z90 run on AMD EPYC™ 7642 processors, while the R182-Z92 is powered by AMD EPYC™ 7402 processors.
《Glossary: What is Computing Cluster?》
GIGABYTE servers offer two primary benefits:
1. Incredible performance and faster data transfer speeds, made possible by the AMD EPYC™ processors and other key components.
2. Smart management functions that guarantee availability and optimize power efficiency.
Incredible Performance and Faster Data Transfer with AMD Processors
AMD EPYC™ processors lie at the heart of the powerful GIGABYTE servers. Launched in 2017, these processors utilize the same x86 architecture as Intel Xeon, but they offer a higher number of cores and threads per CPU. This makes them the natural choice for customers who want to get more computing power out of each processor.
《Glossary: What is Thread?》
GIGABYTE R282-Z90 servers form the controller node in NIPA Cloud’s computing cluster. Administrators use OpenStack controller management software to manage and support the cluster’s compute, storage, and networking nodes.
The R282-Z90 and R182-Z90 run on the AMD EPYC™ 7642, which houses 48 cores and 96 threads in a single CPU, with a maximum clock rate of 3.3GHz. The R182-Z92 runs on the AMD EPYC™ 7402, which houses 24 cores and 48 threads in a single CPU, with a maximum clock rate of 3.35GHz. Empowered by high-performing processors, NIPA Cloud’s new cluster is able to run all its virtual machines (VM) on dedicated cores, without any of the cores being shared or overcommitted, so that a multitude of tasks can be completed simultaneously—a must for any CSP servicing a large number of customers.
《Glossary: What is Virtual Machine?》
What’s more, AMD EPYC™ processors support PCIe Gen 4.0, which boasts a maximum bandwidth of 64GB/s, twice that of the previous generation. This doubles the bandwidth available from the CPU to other key components within the server, such as graphics cards, storage devices, or high-speed network cards. The EPYC™ processors feature eight channels of DDR4 memory, which can support RDIMM or LRDIMM memory modules at speeds of up to 3200MHz, even at two DIMMs per channel. This feature is a unique solution developed by GIGABYTE. It gives customers a performance edge with more memory capacity at faster speeds than other products on the market.
Glossary:《What is PCIe?》《What is DIMM?》
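The PCIe and memory figures cited above can be put in perspective with a little arithmetic. The snippet below is illustrative only: it uses the commonly quoted per-lane throughput for each PCIe generation and the nominal DDR4-3200 transfer rate, not measured numbers from NIPA Cloud's cluster.

```python
# Approximate PCIe throughput for an x16 link, per direction, after encoding overhead.
gb_per_lane = {"Gen 3": 1.0, "Gen 4": 2.0}   # ~GB/s per lane per direction
lanes = 16
for gen, per_lane in gb_per_lane.items():
    per_dir = per_lane * lanes
    print(f"PCIe {gen} x16: ~{per_dir:.0f} GB/s per direction (~{per_dir * 2:.0f} GB/s bidirectional)")

# Peak DDR4-3200 memory bandwidth across eight channels.
transfers_per_second = 3200e6     # 3,200 MT/s
bytes_per_transfer = 8            # 64-bit channel
channels = 8
peak = transfers_per_second * bytes_per_transfer * channels / 1e9
print(f"DDR4-3200 x {channels} channels: ~{peak:.1f} GB/s peak")
```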
GIGABYTE R182-Z90 servers handle computing for NIPA Enterprise Public Cloud. Their AMD EPYC™ processors contain a high number of cores and threads, making it possible for NIPA Cloud to offer more computational resources and better performance at a competitive cost.
The upshot of all this is that clients using NIPA Enterprise Public Cloud can enjoy more computational resources than they’ve had before at a better cost, because they can rent more or less capacity depending on the situation, and they don’t have to worry about upkeep. NIPA Cloud also benefits from the AMD processors’ high core counts, since it can fulfill more requests with the same number of servers, which is very cost-efficient.
Smart Management Functions for Availability and Power Efficiency
The second issue customers consider when migrating to the cloud is the question of availability. Access to cloud services cannot be interrupted; therefore, servers hosting the cloud must always be online, and they should conserve energy where possible, to protect the environment and help the operator reduce costs. To this end, GIGABYTE offers a variety of smart management functions in its servers:
● Smart Crises Management and Protection (SCMP)
Patented by GIGABYTE, SCMP is a feature included in servers without a fully redundant power supply unit (PSU) design. With SCMP, in the event of a faulty PSU or overheating, the system will force the CPU to go into an ultra-low power mode to reduce the power load, which protects the system from unexpected shutdowns and avoids component damage or data loss.
● Smart Ride Through (SmaRT)
To prevent downtime or data loss that may result from a power outage, GIGABYTE implemented SmaRT in all its servers. In the unfortunate event of a blackout, the system will manage its power consumption (known as throttling) while maintaining availability and reducing the power load. Capacitors within the PSU can provide power for ten to twenty milliseconds, which is ample time to transition to a backup power source, ensuring that services are not disrupted.
● Automatic Fan Speed Control
GIGABYTE servers are enabled with Automatic Fan Speed Control to achieve optimal cooling and power efficiency. Individual fan speeds are adjusted automatically according to temperature sensors strategically placed inside the server.
● Cold Redundancy
To take advantage of the fact that a PSU will run at greater power efficiency with a higher load, GIGABYTE has introduced a power management feature called Cold Redundancy for servers with N+1 redundancy. When the total system load falls below 40%, the system will automatically place one PSU in standby mode, resulting in a 10% improvement in power efficiency.
GIGABYTE R182-Z92 servers make up the storage node of NIPA Enterprise Public Cloud. As with the majority of GIGABYTE servers, they are protected by a host of smart management functions, such as Automatic Fan Speed Control, Cold Redundancy, GSM, SCMP, and SmaRT, to prevent service interruption and data loss.
● GIGABYTE Server Management (GSM)
Other providers may charge for management software; however, GIGABYTE Server Management (GSM), a proprietary remote management console (RMC) for multiple servers, is free to download on the official GIGABYTE website. Fully compatible with either IPMI or Redfish (RESTful API) connection interfaces, GSM comprises various sub-programs that enable easy management of the entire cluster. NIPA Cloud administrators can monitor and manage the new cluster remotely, ensuring the availability of cloud services.
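Because GSM speaks the standard IPMI and Redfish interfaces, the same kind of health information can also be pulled from a server's BMC with a plain Redfish REST call. The sketch below is a generic example of such a call using Python's requests library; the BMC address and credentials are placeholders, and this is not GSM's own API.

```python
import requests

# Placeholder BMC address and credentials; substitute real values in practice.
BMC = "https://10.0.0.42"
AUTH = ("admin", "password")

def system_health(bmc_url: str) -> dict:
    """Query the standard Redfish Systems collection and return basic status."""
    systems = requests.get(f"{bmc_url}/redfish/v1/Systems", auth=AUTH, verify=False).json()
    first_system = systems["Members"][0]["@odata.id"]        # e.g. /redfish/v1/Systems/1
    detail = requests.get(f"{bmc_url}{first_system}", auth=AUTH, verify=False).json()
    return {
        "model": detail.get("Model"),
        "power_state": detail.get("PowerState"),
        "health": detail.get("Status", {}).get("Health"),
    }

if __name__ == "__main__":
    print(system_health(BMC))
```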
Building a Smarter, Better Tomorrow with Cloud Migration
A member of the team knowledgeable about the selection process says that NIPA Cloud considered a number of brands, but ultimately chose GIGABYTE, “Because GIGABYTE is known for its good quality and competitive price, and its server products help NIPA Cloud provide better cloud computing services for customers.”
Dr. Abhisak says, “NIPA Cloud is unique as a local brand because we can compete with global CSPs like AWS, GCP, and Microsoft Azure, not to mention emerging Chinese providers.” Buoyed by the new NIPA Enterprise Public Cloud and supported by GIGABYTE’s industry-leading server solutions, NIPA Cloud is confident of securing its foothold in the burgeoning Thai market. It also has plans to expand to other Southeast Asian countries, including Myanmar, Malaysia, and Singapore.
GIGABYTE is happy to bolster NIPA Cloud’s ongoing effort to offer world-class cloud computing services to customers in the region. GIGABYTE’s motto is “Upgrade Your Life”. It is a tireless commitment to the proliferation of cutting-edge technology in our daily lives, so we can build a smarter and better tomorrow. Cloud migration will certainly advance this vision, as enterprises can create better value with ample computational resources, and workers can benefit from more innovative tools and services.

View details

[GIGABYTE] GIGABYTE 5G and Edge Solutions: The Servers of Choice for NVIDIA Aerial SDK, Taipei Music

2023.03.14

[GIGABYTE] GIGABYTE 5G and Edge Solutions: The Servers of Choice for NVIDIA Aerial SDK, Taipei Music

What are the Attributes of an Ideal Edge Server in the 5G Era?
At its core, an edge server is a solution designed to combine edge computing network architecture with 5th Generation mobile cellular networks, commonly known as 5G. 5G provides greater bandwidth and higher speeds of up to 10 Gbit/s—that is ten times faster than current 4G networks. Edge computing is a way to position computation as close to the end user as possible, further reducing latency and bandwidth consumption. The upshot is that mobile devices can now connect to data centers just as easily as if there were a physical cable between them.
The implications are groundbreaking. Ultra-high-definition video streaming and cloud gaming are just the tip of the iceberg. Radical innovations such as autonomous vehicles may soon become a fact of life, all thanks to 5G communications and edge computing solutions.
Learn More:《What is Data Center?》《How to Build Your Data Center with GIGABYTE? A Free Tech Guide》《Israeli Autonomous Vehicle Developer Builds Self-Driving Cars with GIGABYTE》
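To make the bandwidth comparison above concrete, the toy calculation below estimates how long a large video file would take to download at the peak rates mentioned (10 Gbit/s for 5G versus roughly 1 Gbit/s for 4G). The file size is an illustrative assumption, and peak rates are theoretical; real-world throughput is lower.

```python
def download_seconds(size_gb: float, link_gbit_per_s: float) -> float:
    """Time to move size_gb gigabytes over a link of link_gbit_per_s gigabits per second."""
    return size_gb * 8 / link_gbit_per_s

file_gb = 50.0  # e.g., a 4K feature-length video (illustrative size)
for name, rate in [("4G (~1 Gbit/s peak)", 1.0), ("5G (10 Gbit/s peak)", 10.0)]:
    print(f"{name}: {download_seconds(file_gb, rate):.0f} s")
```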

View details

[GIGABYTE] 6 Key Knowledge to Build the Power of Computing for Your Business

2023.01.31

[GIGABYTE] 6 Key Knowledge to Build the Power of Computing for Your Business

https://www.gigabyte.com/Article/6-key-knowledge-to-build-the-power-of-computing-for-your-business

View details

[GIGABYTE] 3 Easy Steps to Choose the Right Server Cooling Solution

2023.01.31

[GIGABYTE] 3 Easy Steps to Choose the Right Server Cooling Solution

https://www.gigabyte.com/Article/how-to-pick-a-system-cooling-solution-in-3-easy-steps 

View details

[GIGABYTE] The Best Solution for Pursuing Server Performance and Business Sustainable Development

2023.01.31

[GIGABYTE] The Best Solution for Pursuing Server Performance and Business Sustainable Development

https://www.gigabyte.com/Article/the-best-data-center-solution-for-competitive-performance-and-sustainable-development

View details

[GIGABYTE] Data Center Cooling: The Key to Green Computing and a Low-Carbon Transition

2023.01.31

[GIGABYTE] Data Center Cooling: The Key to Green Computing and a Low-Carbon Transition

https://www.gigabyte.com/Article/data-center-cooling-the-key-to-green-computing-and-a-low-carbon-transition

View details

[GIGABYTE] Silicon Valley Startup Sushi Cloud Rolls Out Bare-metal Services with GIGABYTE

2023.01.31

[GIGABYTE] Silicon Valley Startup Sushi Cloud Rolls Out Bare-metal Services with GIGABYTE

View details

'From Smart Factories to the Service Industry': Collaborative Robots See Growing Adoption

2022.03.15

'From Smart Factories to the Service Industry': Collaborative Robots See Growing Adoption

Industrial sites where people work side by side with IoT-enabled collaborative robots are on the rise. A collaborative robot (cobot) is built to safely carry out tasks alongside human workers without disrupting the production line, taking over hazardous jobs or sharing simple, repetitive work; cobots were introduced to improve harsh working environments where human labor alone falls short. Collaborative robots are designed to interact physically with the workers they assist. Unlike conventional industrial robots, they perform a wide variety of tasks together with people, from industrial processes to the service sector, and they come in many types with diverse functions. On-site workers can control the robots without difficulty, carrying out dangerous tasks more safely and efficiently, and the robots can be applied to many kinds of work, such as simple repetitive tasks.
(remainder omitted)
Source: 'From Smart Factories to the Service Industry': Collaborative Robots See Growing Adoption (hellot.net)

View details

A Look at AK Solutions' Custom AI Solutions for the Manufacturing Sector

2021.03.23

A Look at AK Solutions' Custom AI Solutions for the Manufacturing Sector

The manufacturing industry was hit hard by COVID-19. Even so, some see this period as a new turning point for manufacturing. The outlook dimmed as production and demand fell, but the push to automate facilities and make production processes intelligent, using this moment as a springboard, is expected to bring the future of manufacturing that much closer.
AI Breathes New Life into Manufacturing
Smart factory AI has steered a stagnant manufacturing sector toward innovative approaches suited to the digital economy. As a result, contact-free industries such as computers, home appliances, smart factories, and IT have flourished. For example, AI and other new technologies are being actively used in a wide range of fields, including autonomous driving, speech recognition, AI-based equipment diagnostics, intelligent security, and medical systems.
(remainder omitted)
Source: A Look at AK Solutions' Custom AI Solutions for the Manufacturing Sector (hellot.net)

View details

AK Information & Communications Introduces Key Solutions for Building 'Next-Generation Intelligent Transportation Systems'

2020.08.21

AK Information & Communications Introduces Key Solutions for Building 'Next-Generation Intelligent Transportation Systems'

A fundamental way to improve traffic flow in the course of smart urbanization is the 'intelligent transportation system' (ITS). An intelligent transportation system enables monitoring that can detect illegal vehicle activity, and because it can read the flow of traffic, it can collect data on all kinds of traffic problems. When a traffic accident occurs, it can alert the traffic officer closest to the scene so that action can be taken quickly, and it can monitor the real-time average vehicle speed in every direction at an intersection to read traffic flow.
(remainder omitted)
http://www.hellot.net/new_hellot/magazine/magazine_read.html?code=202&sub=004&idx=53900

View details

[Innodisk] Innodisk Pushing the Envelope with Industrial-grade 32GB DRAM

2019.10.29

[Innodisk] Innodisk Pushing the Envelope with Industrial-grade 32GB DRAM

Innodisk’s new 32GB industrial-grade DRAM series aims at expanding capacity for the IoT and 5G fields. Anticipating the exponential demand for data capacity is what spurred Innodisk to launch its newest DRAM series. With the up-and-coming AIoT and 5G markets, sufficiently large-scale memory capacity is no longer purely a data center concern.
▶ Powering New Technology
Running AI at the edge and powering the high requirements of 5G creates a need for high data capacity and simultaneous data analytics at every link in the system infrastructure. The 32GB DRAM series solves this by effectively doubling the capacity of previous modules. Furthermore, at 2,666 MT/s, the transfer speed is more than enough to meet high-capacity networking requirements.
▶ Medical Logging and Wide Temp Rugged Servers
The rugged servers onboard unmanned vehicles have to handle large and continuous data input, which in turn requires high capacity. Picture a driverless bus: it runs year-round in an urban environment with varying weather, and is susceptible to air pollution and extreme temperatures. This is why the 32GB series can be further enhanced with anti-sulfuration measures, wide-temperature specifications, and a heat spreader to tackle the tough conditions in the field.
▶ Extended Compatibility
The modules are available in several form factors, including UDIMM, SODIMM, ECC-SODIMM, and wide-temperature modules. Additionally, the new 32GB DRAM series is also compatible with the newest AMD X570 and Intel Z390/Z370 platforms, as well as 3rd Gen AMD Ryzen 3000-series and Intel Coffee Lake CPUs.
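For a rough sense of what a 2,666 MT/s module delivers, the calculation below multiplies the transfer rate by the 64-bit (8-byte) width of a standard DDR4 module. This is the theoretical peak per module, not an Innodisk benchmark figure.

```python
# Theoretical peak bandwidth of a single DDR4-2666 module (64-bit data bus).
transfers_per_second = 2_666_000_000
bytes_per_transfer = 8            # 64-bit module width
peak_gb_per_s = transfers_per_second * bytes_per_transfer / 1e9
print(f"DDR4-2666 peak per module: ~{peak_gb_per_s:.1f} GB/s")   # ~21.3 GB/s
```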

View details

[Innodisk] Innodisk’s New DRAM and SSD Innovations to Take Center Stage at FMS 2019

2019.10.29

[Innodisk] Innodisk’s New DRAM and SSD Innovations to Take Center Stage at FMS 2019

View details

[Innodisk] Innodisk’s InnoAGE™ SSD connected by Microsoft Azure Sphere

2019.10.29

[Innodisk] Innodisk’s InnoAGE™ SSD connected by Microsoft Azure Sphere

View details