With the new Neural Accelerators, the iPhone 17 Pro delivers a 2× leap in inference speed: FLUX.1 completes in under 35 seconds, while larger 20B models like Qwen Image finish in just over 45, all on-device.
What about the Stable Diffusion 3.5 Large model? Will it also gain performance on this new SoC? With which settings, and on which app version?