License Desk

Choose the model you can actually ship.

The best model is not always the largest one. We look at task fit, license restrictions, GPU class, VRAM, tuning time, and deployment route before recommending a path.

Model Selector

Tell us what the model needs to do

Paste any public Hugging Face model ID for live metadata, logo, license, architecture, VRAM, and timing estimates.

Select a task to see the recommended model family, license path, processor class, VRAM, and processing time.
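The VRAM figure behind these estimates can be approximated from parameter count and precision alone. A minimal sketch, assuming a simple weights-only heuristic with a flat 20% headroom factor for activations and runtime overhead (the factor and the helper name are our own, not a Hugging Face API):

```python
# Rough VRAM estimate for loading model weights at a given precision.
# Heuristic only: real usage also depends on context length, batch size,
# and framework overhead; the 20% headroom factor is an assumption.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "bf16": 2.0, "int8": 1.0, "int4": 0.5}

def estimate_vram_gb(num_params: float, dtype: str = "fp16", overhead: float = 0.20) -> float:
    """Estimate GPU memory (in GB, 1 GB = 1e9 bytes) needed to hold a model's weights."""
    weights_gb = num_params * BYTES_PER_PARAM[dtype] / 1e9
    return round(weights_gb * (1.0 + overhead), 1)

# A 7B-parameter model: ~14 GB of fp16 weights, ~16.8 GB with headroom;
# 4-bit quantization brings the same model down to roughly 4.2 GB.
print(estimate_vram_gb(7e9, "fp16"))   # → 16.8
print(estimate_vram_gb(7e9, "int4"))   # → 4.2
```

This is why the selector asks for GPU class up front: the same model family can land on very different hardware depending on quantization.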
Commercial-friendly

Apache 2.0

Good for: A solid default for SMB tools, internal assistants, and customer-facing products.

Watch: Keep notices and attribution. Check the actual model card for dataset or trademark notes.

Read license
Permissive

MIT

Good for: Commercial use and derivative works, with minimal obligations.

Watch: Keep copyright and permission notices. Model cards may add operational guidance.

Read license
Commercial with conditions

Llama Community License

Good for: Production-grade assistants and multimodal workflows, once Meta access is approved.

Watch: Accept Meta terms, respect usage restrictions, and review any scale thresholds.

Read license
Open model with responsible-use terms

BigCode OpenRAIL-M

Good for: Strong fit for code assistants and repository-specific completion.

Watch: Generated code may need attribution review. Check policy restrictions for harmful use.

Read license
Route-specific

Gemma / Gemini Terms

Good for: Open-weight tuning via Gemma, or managed API access via Gemini.

Watch: Gemini API is not a downloadable local model. Gemma and Gemini have different terms.

Read license
Depends on the model

Custom Hugging Face license

Good for: Best for specialized languages, industries, vision-language, embeddings, or domain models.

Watch: We inspect the model card, license field, gated status, architecture, and commercial restrictions first.

Read license
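The inspection step above can be sketched in a few lines. This is an illustrative helper, not our actual tooling: the input dict mirrors the JSON shape of the public `https://huggingface.co/api/models/{model_id}` endpoint (fields such as `id`, `tags`, `cardData`, and `gated`), and the example runs on a hand-written sample rather than a live request:

```python
# Summarize the license-relevant fields of a model's public Hub metadata.
# The field names ("cardData", "gated", "tags") follow the shape of the
# public /api/models/{model_id} JSON response.

def summarize_license(info: dict) -> dict:
    card = info.get("cardData") or {}
    # License may appear in cardData or as a "license:<id>" tag.
    license_id = card.get("license") or next(
        (t.split("license:", 1)[1] for t in info.get("tags", []) if t.startswith("license:")),
        None,
    )
    return {
        "model": info.get("id"),
        "license": license_id,
        "gated": bool(info.get("gated")),            # gated repos need access approval
        "needs_review": license_id is None or bool(info.get("gated")),
    }

# Hand-written sample metadata (hypothetical model ID, not a live API call):
sample = {
    "id": "example-org/example-7b",
    "tags": ["license:apache-2.0", "text-generation"],
    "cardData": {"license": "apache-2.0"},
    "gated": False,
}
print(summarize_license(sample))
```

Anything with a missing license field or a gated flag gets routed to a manual review of the model card before we recommend a deployment path.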