An AMD-based LLM developer has mocked the scarcity of NVIDIA's AI GPUs in a short, clever video. NVIDIA's AI GPU shortages are becoming a major issue for the AI sector, pushing some AI startups to switch camps to meet their demand.
Lamini's CEO Posts a Short Video Accurately Depicting NVIDIA AI GPU Shortages While Cooking Up AMD Instinct GPUs
Lamini is an LLM platform that helps enterprises and developers build AI applications. Its CEO, Sharon Zhou, took to X to express the frustration of various AI startups, including her own, over the lack of availability of NVIDIA AI GPUs, through a short and funny video.
Just grilling up some GPUs
Kudos to Jensen for baking them first https://t.co/4448NNf2JP pic.twitter.com/IV4UqIS7OR
— Sharon Zhou (@realSharonZhou) September 26, 2023
Sharon walks into a kitchen (an iconic setting for NVIDIA's own CEO, famous for his kitchen keynotes) in search of an LLM AI accelerator and opens an oven to see what's "cooking." The oven's lead time, however, turns out to be up to 52 weeks, so she checks her grill instead and finds AMD Instinct accelerators ready to be served.
Sharon's depiction of the current situation is accurate: NVIDIA's AI GPUs, especially the H100, are facing huge order backlogs, with delivery dates exceeding the six-month mark. AMD has an edge here because its current volume of Instinct AI accelerator orders is small compared to what NVIDIA is fulfilling, leaving it room to deliver quickly. Lamini has been running LLMs on AMD's Instinct GPUs for a while now, and the company is committed to extensive cooperation with AMD.
What’s more, with Lamini, you can stop worrying about the 52-week lead time for NVIDIA H100s. Using Lamini exclusively, you can build your own enterprise LLMs and ship them into production on AMD Instinct GPUs. And shhhh… Lamini has been secretly running on over one hundred AMD GPUs in production all year, even before ChatGPT launched. So, if you’ve tried Lamini, then you’ve tried AMD.
Now, we’re excited to open up LLM-ready GPUs to more folks. Our LLM Superstation is available both in the cloud and on-premise. It combines Lamini's easy-to-use enterprise LLM infrastructure with AMD Instinct MI210 and MI250 accelerators. It is optimized for private enterprise LLMs, built to be heavily differentiated with proprietary data.
Lamini is the only LLM platform that exclusively runs on AMD Instinct GPUs — in production. Ship your own proprietary LLMs! Just place an LLM Superstation order to run your own Llama 2-70B out of the box—available now and with an attractive price tag (10x less than AWS).
via Lamini
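For readers curious what "building your own enterprise LLM and shipping it into production" on such a platform looks like in practice, below is a minimal Python sketch of the fine-tune-then-query loop the pitch describes. It uses a stand-in client class; the names and signatures are illustrative assumptions for this article, not Lamini's documented API.

```python
# Hypothetical sketch of an enterprise LLM fine-tuning workflow on a
# Lamini-style platform. Class and method names are illustrative
# assumptions, not Lamini's documented API.
from typing import Dict, List


class LLMClient:
    """Minimal stand-in for a hosted LLM platform client."""

    def __init__(self, model_name: str) -> None:
        self.model_name = model_name
        self.examples: List[Dict[str, str]] = []

    def add_data(self, examples: List[Dict[str, str]]) -> None:
        # Proprietary input/output pairs used to differentiate the model.
        self.examples.extend(examples)

    def train(self) -> None:
        # On a real platform this would kick off a fine-tuning job on the
        # backing accelerators (AMD Instinct MI210/MI250 in Lamini's case);
        # here it only reports what would be submitted.
        print(f"Submitting {len(self.examples)} examples to fine-tune "
              f"{self.model_name}...")

    def generate(self, prompt: str) -> str:
        # Placeholder inference call.
        return f"[{self.model_name} response to: {prompt!r}]"


if __name__ == "__main__":
    llm = LLMClient("meta-llama/Llama-2-70b")
    llm.add_data([
        {"input": "What is our refund policy?",
         "output": "Refunds are issued within 30 days of purchase."},
    ])
    llm.train()
    print(llm.generate("Summarize our refund policy."))
```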
The industry has yet to see large-scale adoption of AMD's Instinct GPUs, which haven't reached the level of popularity of NVIDIA's H100 for now. In terms of raw performance, the H100 is in the lead; however, AMD is closing the gap through rapid improvements to its ROCm software stack, which unlocks more of the hardware's compute performance. Team Red has the Instinct MI300X and the later MI400 planned as future releases, but since they haven't yet hit the market, we can't comment on them.
The one edge AMD can readily capitalize on is its ability to cater to large orders, since NVIDIA is on the back foot in this department. With wait times of up to a year for NVIDIA's H100, LLM startups and companies will eventually eye AMD and Intel, which is why every tech company is accelerating its AI development tremendously heading into 2024.