SW/HW Co-optimization Strategy for Large Language Models (LLMs) | by Liz Li | Dec, 2023



How to stretch every last bit of performance out of your system to run LLMs faster: best practices

Leading Large Language Models (LLMs) such as ChatGPT and Llama are revolutionizing the tech industry and impacting everyone’s lives. However, their cost poses a significant hurdle: applications built on the OpenAI APIs incur substantial expenses for continuous operation ($0.03 per 1,000 prompt tokens and $0.06 per 1,000 sampled tokens).
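
For a sense of scale, here is a quick back-of-the-envelope estimate based on the prices quoted above. The request volume and average token counts are hypothetical figures chosen purely for illustration.

```python
# Rough cost estimate at the API prices quoted above
# ($0.03 per 1K prompt tokens, $0.06 per 1K sampled tokens).
# Traffic and token counts below are hypothetical, for illustration only.

PROMPT_PRICE_PER_1K = 0.03   # USD per 1,000 prompt tokens
SAMPLED_PRICE_PER_1K = 0.06  # USD per 1,000 sampled (completion) tokens

requests_per_day = 100_000          # hypothetical traffic
prompt_tokens_per_request = 500     # hypothetical average prompt length
sampled_tokens_per_request = 300    # hypothetical average completion length

daily_cost = requests_per_day * (
    prompt_tokens_per_request / 1_000 * PROMPT_PRICE_PER_1K
    + sampled_tokens_per_request / 1_000 * SAMPLED_PRICE_PER_1K
)
print(f"Estimated daily cost:   ${daily_cost:,.2f}")
print(f"Estimated monthly cost: ${daily_cost * 30:,.2f}")
```

At these assumed volumes the bill already lands in the tens of thousands of dollars per month, which is why the hosting question below matters.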

To cut costs, companies tend to host their own LLMs, with expenses varying widely based on model size (larger LLMs with 100–200B parameters can cost roughly 10 times more than smaller ones with 7–15B parameters). This trend has spurred an AI chip race, as major tech companies aim to develop their own AI chips to reduce reliance on expensive hardware.

Trend of model size. Source: AWS re:Invent

How can we squeeze every bit of computing power out of the hardware to run LLMs? In this article, I present a thorough analysis of LLM optimization strategies across models, software, and hardware. It follows the AI SW/HW co-design methodology I described in a previous article, with a much more in-depth discussion of LLM-specific cost and performance optimization.

Source: created by the author and colleagues

The compute and memory demands of running LLMs are growing exponentially, while hardware compute and memory capabilities are improving on a much slower trajectory, as depicted in the image above. To bridge this performance gap, it is crucial to explore enhancements in three key areas:

  1. Algorithmic Improvement and Model Compression: How can we augment models with features that reduce compute and memory demands without compromising quality? What are the latest advancements in LLM quantization that reduce model size while maintaining quality? (A minimal quantization sketch follows this list.)
  2. Efficient SW Stack and Acceleration Libraries: What considerations are vital in…
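
To make the quantization idea in point 1 concrete, below is a minimal sketch of symmetric per-tensor INT8 weight quantization in Python with NumPy. It illustrates the general technique only, not any specific library API; the matrix size and the simple per-tensor scaling are assumptions made for this example, and production methods (per-channel scales, GPTQ, AWQ, etc.) are considerably more sophisticated.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor INT8 quantization: w is approximated by scale * q."""
    scale = np.max(np.abs(weights)) / 127.0          # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Toy example: a random FP32 weight matrix shrinks 4x when stored as INT8.
w = np.random.randn(4096, 4096).astype(np.float32)   # hypothetical layer weights
q, scale = quantize_int8(w)
error = np.mean(np.abs(w - dequantize(q, scale)))
print(f"FP32 size: {w.nbytes / 1e6:.1f} MB, INT8 size: {q.nbytes / 1e6:.1f} MB")
print(f"Mean absolute quantization error: {error:.5f}")
```

The memory saving (4x versus FP32, 2x versus FP16) translates directly into lower serving cost and higher attainable batch sizes; the open question raised above is how far precision can be pushed down before model quality degrades.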