OpenMMLab
Announcing XTuner: An Efficient Finetune Toolkit for LLM
With XTuner, 8GB GPU memory is all you need to create your own AI assistant.
3 min read · Sep 1, 2023

The AI New Era: How Should Large Models “Rack Their Brains”?
How do we address the shortcomings of large language models in complex reasoning? By using the chain-of-thought prompting technique!
9 min read · Aug 24, 2023

Benchmarking the Multi-modal Capability of Bard with MMBench
In order to provide an overview of Bard’s multi-modal ability, we evaluate it on the test split of MMBench as below and compare it with…
4 min read · Aug 18, 2023

Faster and More Efficient 4-bit Quantized LLM Model Inference
LMDeploy has released an exciting new feature: 4-bit quantization and inference. This not only trims down the model’s memory overhead to…
4 min read · Aug 16, 2023

A Thorough Evaluation of AX620A from the Security Industry’s Perspective
From the perspective of the security business, we have conducted a comprehensive test of the AX620A, covering various evaluation indicators such as…
8 min read · Aug 10, 2023

Deploy Llama-2 Models Easily with LMDeploy!
This article will guide you on how to quickly deploy the Llama-2 models with LMDeploy.
4 min read · Aug 2, 2023

It’s 2023. Is PyTorch’s FSDP the Best Choice for Training Large Models?
Why do codebases for training large models tend to use frameworks like DeepSpeed or ColossalAI, with scant regard for PyTorch’s native FSDP…
14 min read · Aug 1, 2023

Fine-tuning Llama2 Takes Less Than 200 Lines of Code!
This article will guide you through fine-tuning Llama2 in less than 200 lines of code!
3 min read · Jul 27, 2023

Join OpenMMLab Codecamp: Harness Your Coding Skills and Shape the Future of Open Source!
Want to improve your programming skills?
2 min read · Jul 24, 2023

Meta Open-Sources LLaMA-2: An Overview of New Features
Let’s take a quick look at the exciting new features of the upgraded LLaMA-2.
4 min read · Jul 20, 2023