Models
Overview
Our suite of AI models covers a wide range of language processing tasks. These models come from leading AI research labs and platforms such as AI21 Labs, Anthropic, Cohere, OpenAI, and Replicate.
- The Pulze series, our own home-grown suite of AI models, is tailored to our clients' specific needs. Pulze-v0 stands out for its ability to identify and use the most suitable existing model for any given prompt (see the sketch following this list). These models are known for their fast processing and are especially popular with our enterprise clientele.
- AI21 Labs models cater to diverse needs, from the robust J2-Ultra, ideal for the most complex language tasks, to the efficient J2-Light, designed for simpler, quicker responses.
- Aleph Alpha offers the Luminous model series: Luminous-Supreme shines in creative text writing, Luminous-Supreme-Control is optimized for tasks like information extraction and language simplification, Luminous-Base-Control handles tasks like classification and labeling, and Luminous-Extended-Control further enhances this capability.
- Anthropic's Claude series excels at sophisticated dialogue and creative content generation.
- Cohere's Command models are optimized for instruction-following conversational tasks.
- OpenAI offers a line of GPT models, each tuned for specific needs: GPT-4 with its strong chat optimization, and GPT-3.5 Turbo providing cost-effective solutions. The Text-Davinci and Text-Curie series also excel at broad language tasks and are the best choice under certain conditions.
- Replicate makes it easy to run machine learning models in the cloud from your own code. We currently support some of their open-source models.
- HuggingFace is the platform where the machine learning community collaborates on models, datasets, and applications.
- MosaicML offers two standout models: MPT-7B-Instruct, with 6.7B parameters, trained on the Databricks Dolly-15k and Anthropic datasets; and the larger MPT-30B-Instruct, with 30B parameters, expanded with datasets ranging from CompetitionMath to Spider, both delivering a swift 5 RPS.
- Meta introduces Llama2-70B-Chat, a 70B-parameter chat-optimized model with a 4096-token context, licensed under the LLAMA 2 Community License.
- Finally, GooseAI's GPT-J and GPT-Neo series serve as powerful open-source alternatives for a variety of use cases.
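
For illustration, here is a minimal sketch of how a caller might either let Pulze-v0 route a prompt automatically or pin one of the models listed above, assuming an OpenAI-compatible chat-completions endpoint. The base URL, model identifiers, and environment variable name below are assumptions for the sketch, not confirmed values; consult the API reference for the exact ones.

```python
# Minimal sketch, assuming an OpenAI-compatible chat-completions endpoint.
# The base URL, model identifiers, and env var name are illustrative assumptions.
import os

from openai import OpenAI  # pip install openai

client = OpenAI(
    api_key=os.environ["PULZE_API_KEY"],  # assumed environment variable name
    base_url="https://api.pulze.ai/v1",   # assumed base URL
)

# Option 1: let the Pulze-v0 router choose the most suitable underlying model.
routed = client.chat.completions.create(
    model="pulze-v0",  # assumed identifier for the routing model
    messages=[{"role": "user", "content": "Summarize this release note in two sentences."}],
)

# Option 2: pin a specific provider model instead of routing.
pinned = client.chat.completions.create(
    model="openai/gpt-4",  # assumed provider/model naming scheme
    messages=[{"role": "user", "content": "Draft a short product announcement."}],
)

print(routed.choices[0].message.content)
print(pinned.choices[0].message.content)
```

The design choice illustrated here is simply routing versus pinning: the first call delegates model selection to Pulze-v0, while the second fixes the model when a workload has known requirements for cost, speed, or quality.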
Each model is designed and tuned to perform at its best, balancing factors such as task complexity, processing power, response speed, cost efficiency, and output quality.