LLM
LLaMA-65B
Inference
LLaMA, open-sourced by Meta, is one of the most popular foundation models in large-model research; widely used models such as Vicuna and Koala are trained on top of it. The LLaMA model with 65 billion parameters now supports inference on the STC AC700.
LLaMA-7B
Inference
LLaMA, open-sourced by Meta, is one of the most popular foundation models in large-model research; widely used models such as Vicuna and Koala are trained on top of it. The LLaMA model with 7 billion parameters now supports inference on the STC AC700.
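For orientation, here is a minimal text-generation sketch in the style of the Hugging Face Transformers API. The checkpoint path, prompt, and generation settings are illustrative assumptions, and any STC AC700-specific runtime configuration is outside its scope.

    # Minimal LLaMA inference sketch; the checkpoint path below is a hypothetical placeholder.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "path/to/llama-7b"  # hypothetical local checkpoint location

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        torch_dtype=torch.float16,  # half precision to fit large checkpoints in memory
    )
    model.eval()

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))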
Baichuan-7B
Inference
Baichuan-7B is an open-source, commercially available large-scale pre-trained language model developed by Baichuan Intelligence. Built on the Transformer architecture, the 7-billion-parameter model was trained on approximately 1.2 trillion tokens, supports both Chinese and English, and has a 4,096-token context window. The 7-billion-parameter Baichuan model now supports inference on the STC AC700.
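As a sketch only: the model id below is the public Hugging Face release of Baichuan-7B, and trust_remote_code is needed because that repository ships custom model code; accelerator-specific setup is not shown.

    # Minimal Baichuan-7B inference sketch (model id assumed from the public release).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "baichuan-inc/Baichuan-7B"
    tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype=torch.float16, trust_remote_code=True
    )
    model.eval()

    # A Chinese prompt, since the model supports both Chinese and English.
    inputs = tokenizer("登鹳雀楼->王之涣\n夜雨寄北->", return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1)
    print(tokenizer.decode(out[0], skip_special_tokens=True))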
Baichuan-13B
Inference
Baichuan-13B is an open-source, commercially available large language model with 13 billion parameters, developed by Baichuan Intelligence as the successor to Baichuan-7B. It achieves the best results among models of its size on authoritative Chinese and English benchmarks. The Baichuan model with 13 billion parameters now supports inference on the STC AC700.
BLOOM-7B
Inference
BLOOM is an autoregressive large language model (LLM) trained on large amounts of text data using industrial-scale computing resources to continue text from a prompt. As a result, it can output coherent text in 46 natural languages and 13 programming languages. The 7-billion-parameter BLOOM model now supports inference on the STC AC700.
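A minimal continuation sketch with the Transformers pipeline API; the 7B-class checkpoint name below is an assumption standing in for the catalog's BLOOM-7B build.

    # Minimal BLOOM prompt-continuation sketch (checkpoint name is an assumption).
    from transformers import pipeline

    generator = pipeline("text-generation", model="bigscience/bloom-7b1")
    # BLOOM continues text in many languages; here a Spanish prompt.
    print(generator("El sol sale por el", max_new_tokens=20)[0]["generated_text"])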
GPT-2-13B
Inference
GPT-2 (Generative Pre-trained Transformer 2) is a large language model released by OpenAI in February 2019. GPT-2 can translate text, answer questions, summarize paragraphs, and generate text. The GPT-2 model with 13 billion parameters now supports inference on the Terabyte cards.
GPT-2-13B
Pretrain
GPT-2 (Generative Pre-trained Transformer 2) is a large language model released by OpenAI in February 2019. GPT-2 can translate text, answer questions, summarize paragraphs, and generate text. The GPT-2 model with 13 billion parameters now supports training on the TeraChip.
GPT-2-13B
Fine-tune
GPT-2 (Generative Pre-trained Transformer 2) is a large language model released by OpenAI in February 2019. GPT-2 can translate text, answer questions, summarize paragraphs, and generate text. The GPT-2 model with 13 billion parameters now supports fine-tuning on the TeraChip.
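To illustrate what fine-tuning looks like, a minimal sketch with the Hugging Face Trainer; the small public "gpt2" checkpoint and the toy in-memory corpus are stand-ins for the 13-billion-parameter model and a real dataset, and TeraChip-specific tooling is not shown.

    # Minimal GPT-2 fine-tuning sketch (toy corpus and small checkpoint as stand-ins).
    from datasets import Dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    texts = ["hello world"] * 8  # placeholder corpus
    ds = Dataset.from_dict({"text": texts}).map(
        lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
        remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-ft", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=ds,
        # mlm=False selects causal (next-token) language modeling.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()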
GPT-NeoX-20B
Inference
GPT-NeoX is an autoregressive large language model trained by EleutherAI and widely used in academia, industry, and government laboratories. The GPT-NeoX model with 20 billion parameters now supports inference on the STC AC700.
CV
Resnet50-v1.5
Pretrain | PyTorch
ResNet50 v1.5 (Residual Network) is a convolutional neural network (CNN) widely used in computer vision. ResNet50 v1.5 now supports training on the STC AC700 (based on the PyTorch framework).
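A minimal single-step training sketch in PyTorch; torchvision's resnet50 and the random tensors are stand-ins for the actual model build and dataloader.

    # Minimal ResNet50 training-step sketch in PyTorch (placeholder data).
    import torch
    import torch.nn as nn
    from torchvision.models import resnet50

    model = resnet50(num_classes=1000)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                momentum=0.9, weight_decay=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    images = torch.randn(8, 3, 224, 224)    # placeholder batch of images
    labels = torch.randint(0, 1000, (8,))   # placeholder class labels
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.4f}")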
Resnet50-v1.5
Inference
ResNet50 v1.5 (Residual Network) is a convolutional neural network (CNN) widely used in computer vision. ResNet50 v1.5 now supports inference on the STC AC700.
Resnet50-v1.5
Pretrain | PaddlePaddle
ResNet50 v1.5 (Residual Network) is a convolutional neural network (CNN) widely used in computer vision. ResNet50 v1.5 now supports training on the STC AC700 (based on the PaddlePaddle framework).
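The equivalent single-step sketch in PaddlePaddle, again with random placeholder data standing in for a real pipeline.

    # Minimal ResNet50 training-step sketch in PaddlePaddle (placeholder data).
    import paddle
    from paddle.vision.models import resnet50

    model = resnet50(num_classes=1000)
    optimizer = paddle.optimizer.Momentum(learning_rate=0.1, momentum=0.9,
                                          parameters=model.parameters())
    criterion = paddle.nn.CrossEntropyLoss()

    model.train()
    images = paddle.randn([8, 3, 224, 224])   # placeholder batch of images
    labels = paddle.randint(0, 1000, [8])     # placeholder class labels
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    optimizer.clear_grad()
    print(float(loss))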
Yolov5-m
Inference
YOLOv5 builds on the success of previous YOLO versions and introduces new features and improvements. It is fast, accurate, and easy to use, making it an excellent choice for object detection, instance segmentation, and image classification tasks. YOLOv5m (21.2M parameters) now supports inference on the STC AC700.
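A minimal detection sketch that loads YOLOv5m through torch.hub from the public ultralytics/yolov5 repository; the sample image URL is illustrative, and accelerator-specific deployment is not covered.

    # Minimal YOLOv5m inference sketch via torch.hub (downloads the public repo).
    import torch

    model = torch.hub.load("ultralytics/yolov5", "yolov5m", pretrained=True)
    results = model("https://ultralytics.com/images/zidane.jpg")  # path, URL, or array
    results.print()                  # class, confidence, and box for each detection
    print(results.pandas().xyxy[0])  # detections as a DataFrame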
Yolov5-l6
Pretrain | PyTorch
YOLOv5 builds on the success of previous YOLO versions and introduces new features and improvements. It is fast, accurate, and easy to use, making it an excellent choice for object detection, instance segmentation, and image classification tasks. YOLOv5l6 (76.8M parameters) now supports training on the STC AC700 (based on the PyTorch framework).
Yolov5-l6
Inference
YOLOv5 builds on the success of previous YOLO versions and introduces new features and improvements. It is fast, accurate, and easy to use, making it an excellent choice for object detection, instance segmentation, and image classification tasks. YOLOv5l6 (76.8M parameters) now supports inference on the STC AC700.
Voice
Wav2Vec 2.0-base
Pretrain | PyTorch
Wav2Vec 2.0 uses self-supervised training to learn speech recognition from unlabeled audio data and supports recognition in multiple languages. Wav2Vec 2.0-base now supports training on the STC AC700 (based on the PyTorch framework).
Wav2Vec 2.0-base
Inference
Wav2Vec 2.0 uses self-supervised training to learn speech recognition from unlabeled audio data and supports recognition in multiple languages. Wav2Vec 2.0-base now supports inference on the STC AC700.
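A minimal recognition sketch with the Transformers Wav2Vec2 classes; the fine-tuned English checkpoint "facebook/wav2vec2-base-960h" and the random waveform are stand-ins for a real model build and 16 kHz audio.

    # Minimal Wav2Vec 2.0 speech-recognition sketch (placeholder audio).
    import torch
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
    model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
    model.eval()

    waveform = torch.randn(16000)  # placeholder: 1 s of 16 kHz audio
    inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    print(processor.batch_decode(pred_ids))  # greedy CTC decoding to text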
NLP
BERT-Base
Pretrain | PyTorch
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model proposed by Google AI Research in October 2018. BERT-Base now supports training on the STC AC700 (based on the PyTorch framework).
BERT-Base
Inference
BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained language model proposed by Google AI Research in October 2018. BERT-Base now supports inference on the TeraChip card.
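A minimal masked-word prediction sketch using the Transformers fill-mask pipeline, with the public "bert-base-uncased" checkpoint as a stand-in:

    # Minimal BERT-Base masked-word prediction sketch.
    from transformers import pipeline

    unmask = pipeline("fill-mask", model="bert-base-uncased")
    for pred in unmask("Paris is the [MASK] of France."):
        print(f"{pred['token_str']}: {pred['score']:.3f}")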