Apple recently started rolling out its Deep Fusion camera technology, which the company has described as “computational photography mad science”, to the iPhone 11 lineup. It is an image processing system that works automatically behind the scenes in certain conditions.
According to the company, Deep Fusion is an advanced image processing system that uses the A13 Bionic’s Neural Engine to capture images with dramatically better texture, more detail, and reduced noise in lower light. There is no user-facing indicator that Deep Fusion is active; it works automatically and invisibly.
However, the company has clarified that the feature is not used when shooting with the ultra-wide lens, when the “Photos Capture Outside the Frame” mode is enabled, or when shooting burst photos. It’s also worth noting that the feature is only available on the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max.
What is Deep Fusion?
During the launch of the iPhone 11 lineup last month, Apple VP Phil Schiller described the Deep Fusion feature as follows:
“It shoots nine images. Before you press the shutter button, it’s already shot four short images, four secondary images. When you press the shutter button it takes one long exposure, and then in just one second, the Neural Engine analyzes the fused combination of long and short images, picking the best among them, selecting all the pixels, and pixel by pixel, going through 24 million pixels to optimize for detail and low noise, as you see in the sweater there… This is the first time a Neural Processor is responsible for generating the output image. It is computational photography mad science.”
How to use the Deep Fusion camera feature
- Make sure that you’ve updated your iPhone 11, 11 Pro, or 11 Pro Max to iOS 13.2
- Go to Settings, then Camera
- Make sure the ‘Photos Capture Outside the Frame’ mode is turned off
- Make sure you’re using either the wide or the telephoto lens
- Deep Fusion is now working behind the scenes when you shoot photos
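For developers, Apple exposes the same quality-versus-speed trade-off to third-party camera apps through AVFoundation’s `photoQualityPrioritization` API on iOS 13. The sketch below shows the relevant configuration only; full capture-session setup (device discovery, inputs, preview) is omitted, and it must run on a real device with camera access:

```swift
import AVFoundation

// Minimal sketch: opting an AVCapturePhotoOutput into the highest-quality
// processing tier, which lets the system apply techniques like Deep Fusion
// automatically on supported devices. Session wiring is abbreviated.
let photoOutput = AVCapturePhotoOutput()

// The output's ceiling must allow .quality before any individual
// capture can request it.
photoOutput.maxPhotoQualityPrioritization = .quality

let settings = AVCapturePhotoSettings()
// .quality prefers image quality over shutter speed; .speed and
// .balanced are the faster alternatives.
settings.photoQualityPrioritization = .quality

// Capture would then be triggered with a delegate, e.g.:
// photoOutput.capturePhoto(with: settings, delegate: self)
```

Apps that leave the prioritization at its default may not get the full processing pipeline, which mirrors the built-in Camera app’s behavior of applying Deep Fusion only when conditions allow.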