InternLM3 Open Source: Achieving High-Performance Models with 4T Data
Written by InternLM Team
Jan 16
Announcing XTuner: An Efficient Fine-tuning Toolkit for LLMs
With XTuner, 8GB of GPU memory is all you need to create your own AI assistant.
Sep 1, 2023
The AI New Era: How Should Large Models “Rack Their Brains”?
How do we address the shortcomings of large language models in complex reasoning? By using the chain-of-thought prompting technique!
Aug 24, 2023
Benchmarking the Multi-modal Capability of Bard with MMBench
In order to provide an overview of Bard’s multi-modal ability, we evaluate it on the test split of MMBench as below and compare it with…
Aug 18, 2023
Faster and More Efficient 4-bit Quantized LLM Model Inference
LMDeploy has released an exciting new feature — 4-bit quantization and inference. This not only trims down the model’s memory overhead to…
Aug 16, 2023
A Thorough Evaluation of the AX620A from the Security Industry’s Perspective
From the perspective of the security business, we have conducted a comprehensive test of the AX620A, covering evaluation indicators such as…
Aug 10, 2023
Deploy Llama-2 Models Easily with LMDeploy!
This article will guide you on how to quickly deploy the Llama-2 models with LMDeploy.
Aug 2, 2023
It’s 2023. Is PyTorch’s FSDP the Best Choice for Training Large Models?
Why do codebases for training large models tend to use frameworks like DeepSpeed or ColossalAI, with scant regard for PyTorch’s native FSDP…
Aug 1, 2023
Fine-tuning Llama2 Takes Less than 200 Lines of Code!
This article will guide you through fine-tuning Llama2 in less than 200 lines of code!
Jul 27, 2023
Join OpenMMLab Codecamp: Harness Your Coding Skills and Shape the Future of Open Source!
Want to improve your programming skills?
Jul 24, 2023
Meta Open-Sources LLaMA-2: Overview of New Features
Let’s take a quick look at the exciting new features of the upgraded LLaMA-2.
Jul 20, 2023
Highlights from OpenMMLab’s Boosting Computer Vision Research Tutorial at CVPR 2023
The OpenMMLab tutorial at CVPR 2023 has come to a perfect close, and we would like to extend our gratitude to all the attendees who joined…
Jun 26, 2023
OpenMMLab CVPR 2023 Tutorial Coming Soon
🕖 9:00 AM–12:00 PM, Sunday, June 18th, 2023
Jun 16, 2023
Join the OpenMMLab Team!
Please send your resume via email to: jilu@pjlab.org.cn.
May 5, 2023
More than Editing, Unlock the Magic!
Since its inception, MMEditing has been the preferred algorithm library for many image super-resolution, editing, and generation tasks…
Apr 26, 2023
[CVPR 2023] Aligning Bag of Regions for Open-Vocabulary Object Detection
Open-vocabulary object detection (OVD) aims to detect objects of categories that are not annotated in the training process.
Apr 11, 2023
Model Deployment (I): A Brief Introduction to Model Deployment
Many members of the OpenMMLab community face confusion regarding the deployment of OpenMMLab models. To address this issue, an open source…
Apr 10, 2023
Awesome Datasets for Super-Resolution: Introduction and Pre-processing
In the research of image/video super-resolution, a comprehensive understanding of the datasets is crucial. As a toolbox for low-level…
Mar 31, 2023
CVPR 2023 | Phase-Shifting Coder: Predicting Accurate Orientation in Oriented Object Detection
A new method, PSC, for the boundary problem in oriented object detection.
Mar 24, 2023