Explore open source artificial intelligence and machine learning products. From LLMs to AI infrastructure, enjoy AI innovation without vendor constraints.
Developed by Meta, this large model is competitive with proprietary systems and known for strong performance across a wide range of tasks.
From Mistral AI, notable for its efficiency and competitive performance.
Utilizes a hybrid Transformer-Mamba mixture-of-experts architecture, offering high throughput and low memory usage with an effective context length of 256K tokens.
A model focusing on following user directives accurately, specializing in few-turn interactions.
Employs a sparse mixture-of-experts architecture that engages 39 billion active parameters per token, balancing performance and efficiency.
Designed for general-purpose tasks.
Released under the CC-BY-NC-4.0 license, this model is tailored to long-context, multi-step agentic RAG with tool use.
From NVIDIA, a large general-purpose model with extensive alignment.
Designed for efficient processing with strong performance in various applications.
A model optimized for multilingual support and diverse task performance.
Utilizes a sparse mixture-of-experts architecture with 8 experts of 7B parameters each, engaging 12.9B active parameters per token for efficient processing.
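The sparse routing behind both mixture-of-experts entries above can be sketched in a few lines. This is a minimal illustration, assuming Mixtral-style top-2 gating: a router scores every expert for each token, but only the two highest-scoring experts actually run, which is why the active parameter count (12.9B) is far below the 8 × 7B total. The tiny linear layers here stand in for full 7B feed-forward experts.

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 8, 16, 2

# Each "expert" is a small linear layer standing in for a 7B FFN block.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a single token vector through its top-k experts only."""
    logits = x @ router                   # one router score per expert
    top = np.argsort(logits)[-top_k:]     # indices of the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the other experts never execute.
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

token = rng.standard_normal(d_model)
out, chosen = moe_forward(token)
print(f"experts used per token: {len(chosen)} of {n_experts}")
```

Per token, compute scales with the two selected experts rather than all eight, which is the source of the efficiency these entries advertise.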
A model specialized in code generation and understanding, supporting multiple programming languages.
Qwen with Questions, a preview model focused on internal-dialog-like logical reasoning.
A smaller model designed for code generation across various programming languages.
Mistral AI’s model focused on code generation, surpassing larger models such as Llama 3 70B on the HumanEval FIM benchmark.
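For context on the benchmark mentioned above: fill-in-the-middle (FIM) gives the model the code before and after a gap and asks it to generate the missing middle. The sketch below shows the general prompt shape; the sentinel token names are illustrative placeholders, since each model family defines its own.

```python
# Illustrative FIM sentinel tokens -- real models use their own reserved tokens.
PREFIX, SUFFIX, MIDDLE = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Arrange prefix and suffix so generation continues at the gap."""
    return f"{PREFIX}{prefix}{SUFFIX}{suffix}{MIDDLE}"

before = "def add(a, b):\n    return "
after = "\n\nprint(add(2, 3))\n"
prompt = build_fim_prompt(before, after)
# The model would be asked to generate the missing expression between
# `before` and `after`, e.g. "a + b".
print(prompt)
```

In-editor code completion is essentially this task, which is why FIM performance matters for coding assistants.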
Variants of Meta’s Llama models fine-tuned for code generation and understanding.
A model optimized for code generation and conversational AI, facilitating interactive coding assistance.
A model combining the SigLIP-So400m vision encoder with the Gemma-2B language model, designed for versatile vision-language tasks.
Developed by CogAI, focusing on integrating visual and textual data for comprehensive understanding.
An enhanced version of Qwen-VL, incorporating a Vision Transformer (ViT) for seamless image and video input processing, achieving superior performance across various tasks.
A series of multimodal models available in 7B and 72B parameter sizes, designed for advanced vision-language understanding.
A multimodal model trained to understand both natural images and documents, achieving leading performance on various multimodal benchmarks without compromising on text capabilities.
The smallest of Meta’s Llama 3.1 series, offering competitive performance in a lightweight package, suitable for various applications.
A compact hybrid Mamba2-Transformer model optimized for efficiency, offering reduced memory overhead, faster inference, and superior benchmark performance compared with other models in its size class.
High-performance machine learning with automatic differentiation.
A leading deep learning framework.
Distributed computing made easy.
Flexible parallel computing.
Democratizing AI development (some models require purchased licensing).
Streamlining deep learning research (some models require purchased licensing).
Real-time data streaming.
Unified analytics for large-scale data.
Stream processing at its finest.
Workflow automation made simple.
Parallel computing for large datasets.
Interactive querying at scale.
Advanced search and analytics.
Modern table format for data lakes.
Your foundation for scalable AI operations.
Streamlining package management and environment creation for reproducible workflows.
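As an illustration of the reproducible-workflow idea, a conda-style environment file declares an environment's dependencies so it can be recreated identically on another machine. The file below is a hypothetical sketch; the package names and versions are placeholders, not a recommendation.

```yaml
# Illustrative environment file; names and versions are placeholders.
name: ml-env
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy
  - pandas
  - pip
  - pip:
      - torch   # pip-installed packages can be mixed in
```

Committing a file like this alongside your code lets collaborators rebuild the same environment with a single `conda env create -f environment.yml`.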
Simplifying deployment and inference at scale.
We make all the pieces work together. Whether you’re building pipelines, deploying models, or integrating AI solutions, our team ensures smooth implementation for your stack.
Looking for something specific or need guidance on choosing the right tools? We want to hear from you!