stable-diffusion-v1-5@cjwbw

Stable Diffusion with the v1-5 checkpoint

sdxl-controlnet-depth@lucataco

SDXL ControlNet - Depth

depth-anything-v2@chenxwh

Depth estimation with faster inference speed, fewer parameters, and higher depth accuracy.

gfpgan-video@pbarker

GFPGAN for upscaling human faces in video

this-is-fine@zeke

Create your own variants of "this is fine" 🔥☕️🐕

ic_gan@meta

Instance-Conditioned GAN

t2i-adapter-sdxl-canny@adirik

Modify images using Canny edges

anything-v3-better-vae@cjwbw

High-quality, highly detailed anime-style Stable Diffusion with a better VAE

videocrafter@cjwbw

VideoCrafter2: Text-to-Video and Image-to-Video Generation and Editing

search-autocomplete@naklecha

an autocomplete API that runs on the CPU :)

rembg@cjwbw

Remove image backgrounds

bread@mingcv

The online demo of Bread (Low-light Image Enhancement via Breaking Down the Darkness), which enhances images with poor or irregular illumination and annoying noise.

pyglide@afiaka87

GLIDE (filtered), the predecessor to DALL-E 2, with faster PRK/PLMS sampling.

realesrgan@lqhl

Image restoration and face enhancement

moondream1@lucataco

(Research only) Moondream1 is a vision-language model that performs on par with models twice its size

test-endpoint@alexgenovese

Use a Hugging Face model name to test any diffusers model

elden-ring-diffusion@cjwbw

A fine-tuned Stable Diffusion model trained on game art from Elden Ring

chatglm2-6b@nomagick

ChatGLM2-6B: An open-source bilingual chat LLM

fadi-musicgen1@fadighawanmeh

Generate Arab Maqam Melodic Improvisations (Taqasim)

nous-hermes-2-yi-34b-gguf@kcaverly

Nous Hermes 2 - Yi-34B is a state-of-the-art Yi fine-tune, trained on GPT-4-generated synthetic data
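
The identifiers above follow a model@owner pattern typical of hosted-model catalogs. Purely as an illustration, here is a minimal sketch of invoking one such entry, assuming these are Replicate-hosted models reachable through the official replicate Python client; the owner/model mapping, the "prompt" input field, and the hosting assumption are mine, not stated in the listing.

    # Minimal sketch: run a listed model through the replicate Python client.
    # Assumes `pip install replicate` and a REPLICATE_API_TOKEN in the environment.
    import replicate

    # "cjwbw/stable-diffusion-v1-5" is an assumed owner/model reading of the
    # first entry ("stable-diffusion-v1-5@cjwbw"); the input field is illustrative.
    output = replicate.run(
        "cjwbw/stable-diffusion-v1-5",
        input={"prompt": "an astronaut riding a horse, oil painting"},
    )
    print(output)

Any other entry in the list could be substituted the same way, with input fields matching that model's schema.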