Announcing XTuner: An Efficient Finetune Toolkit for LLM (Sep 1, 2023)
With XTuner, 8 GB of GPU memory is all you need to create your own AI assistant.

The AI New Era: How Should Large Models “Rack Their Brains” (Aug 24, 2023)
How do we address the shortcomings of large language models in complex reasoning? By using the chain-of-thought prompting technique!

Benchmarking the multi-modal capability of Bard with MMBench (Aug 18, 2023)
In order to provide an overview of Bard’s multi-modal ability, we evaluate it on the test split of MMBench and compare it with…

Faster and More Efficient 4-bit Quantized LLM Model Inference (Aug 16, 2023)
LMDeploy has released an exciting new feature: 4-bit quantization and inference. This not only trims down the model’s memory overhead to…

Thoroughly evaluate AX620A from the perspective of the security industry (Aug 10, 2023)
From the perspective of the security business, we have conducted a comprehensive test of the AX620A, covering various evaluation indicators such as…

Deploy Llama-2 models easily with LMDeploy! (Aug 2, 2023)
This article will guide you on how to quickly deploy the Llama-2 models with LMDeploy.

It’s 2023. Is PyTorch’s FSDP the best choice for training large models? (Aug 1, 2023)
Why do codebases for training large models tend to use frameworks like DeepSpeed or ColossalAI, with scant regard for PyTorch’s native FSDP…

Fine-tuning Llama2 takes less than 200 lines of code! (Jul 27, 2023)
This article will guide you through fine-tuning Llama2 in less than 200 lines of code!

Join OpenMMLab Codecamp: Harness Your Coding Skills and Shape the Future of Open Source! (Jul 24, 2023)
Want to improve your programming skills?

Meta Open-Sources LLaMA-2, Overview of New Features (Jul 20, 2023)
Let’s take a quick look at the exciting new features of the upgraded LLaMA-2.