Show HN: TRELLIS.2 image-to-3D running on Mac Silicon – no Nvidia GPU needed
68 points by shivampkumar 4 hours ago | 9 comments
I ported Microsoft's TRELLIS.2 (a 4B-parameter image-to-3D model) to run on Apple Silicon via PyTorch MPS. The original requires CUDA with flash_attn, nvdiffrast, and custom sparse convolution kernels, none of which work on Mac.

I replaced the CUDA-specific ops with pure-PyTorch alternatives: a gather-scatter sparse 3D convolution, SDPA attention for sparse transformers, and a Python-based mesh extraction replacing CUDA hashmap operations. Total changes are a few hundred lines across 9 files.
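For anyone curious what the gather-scatter replacement looks like, here's a minimal sketch of the idea. This is illustrative, not the actual code from the repo: shapes, the coordinate-hash lookup, and the submanifold-style rule (outputs only at active input voxels) are my assumptions. For each kernel offset, it gathers features from matching neighbor voxels and scatter-adds their contribution into the output:

```python
import torch

def sparse_conv3d(coords, feats, weight):
    """Gather-scatter sparse 3x3x3 convolution (stride 1, submanifold-style:
    outputs are defined only at the input's active voxels).

    coords: (N, 3) int tensor of active voxel coordinates
    feats:  (N, C_in) features at those voxels
    weight: (3, 3, 3, C_in, C_out) dense kernel
    """
    # Hash active coordinates so neighbor lookups are O(1).
    index = {tuple(c): i for i, c in enumerate(coords.tolist())}
    c_out = weight.shape[-1]
    out = torch.zeros(coords.shape[0], c_out,
                      dtype=feats.dtype, device=feats.device)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                # Gather: pair each output voxel with its input neighbor
                # at this kernel offset, if that neighbor is active.
                pairs = [(i, index[(c[0] + dx, c[1] + dy, c[2] + dz)])
                         for i, c in enumerate(coords.tolist())
                         if (c[0] + dx, c[1] + dy, c[2] + dz) in index]
                if not pairs:
                    continue
                out_idx = torch.tensor([p[0] for p in pairs])
                in_idx = torch.tensor([p[1] for p in pairs])
                w = weight[dx + 1, dy + 1, dz + 1]  # (C_in, C_out)
                # Scatter: accumulate this offset's contribution.
                out.index_add_(0, out_idx, feats[in_idx] @ w)
    return out
```

The real version vectorizes the neighbor matching instead of looping in Python, but the gather → matmul → index_add_ pattern is the core of how a CUDA sparse-conv kernel can be expressed in ops that MPS supports.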

It generates ~400K-vertex meshes from single photos in about 3.5 minutes on an M4 Pro (24GB). Not as fast as an H100 (where it takes seconds), but it works offline with no cloud dependency.

https://github.com/shivampkumar/trellis-mac


gondar 43 minutes ago
Nice work, although this model is not very good. I've tried a lot of different image-to-3D models; the one from meshy.ai is the best, and TRELLIS is in the useless tier. I really hope some good open-source models show up in this domain.
reply
shivampkumar 12 minutes ago
Hey, thanks for sharing this. TRELLIS.2 definitely has room to improve, especially on texturing.

From what I've seen personally, and from community benchmarks, it fares well on geometry and visual fidelity among open-source options, but I agree it's not perfect for every use case.

Meshy is solid; I used it to print my girlfriend a mini 3D model of her for her birthday last year!

Though it's worth noting Meshy is a paid service whose free tier has usage limits, while TRELLIS.2 is MIT-licensed with unlimited local generation. Different tradeoffs for different workflows. Hopefully the open-source side keeps improving.

reply
kennyloginz 35 minutes ago
So much effort, but no examples on the landing page.
reply
shivampkumar 10 minutes ago
You're right, thanks for flagging this. Let me run some generations and push example images.
reply
villgax 2 hours ago
That’s always been possible with the MPS backend. The reason people choose to omit it in HF Spaces/demos is that HF doesn’t offer MPS hardware. People would rather have the thing work at best speed than 10x slower just for compatibility.
reply
Reubend 2 hours ago
Are you saying the original one worked with MPS? Or are you just saying it was always theoretically possible to build what OP posted?
reply
refulgentis 2 hours ago
It’s always been possible, but it’s not possible because there’s no backend, and no one wants it to be possible because everyone needs 10x the speed of running on a Mac? I’m missing something, I think.
reply
hank808 24 minutes ago
Nothing much here. WTF is this near number 1 on the front page of HN?
reply
kennyloginz 18 minutes ago
Good question.
reply