Technologies
ProRes Log Video on iPhone: What Is It and Why You Should Use It
From what it actually is to how to use it, here’s what to know about shooting Log video on the iPhone 15 Pro and 16 Pro.

The iPhone 16 Pro packs an incredible camera setup for both still images and video production, including its fun 4K slow-motion mode. To help it capture pro-standard footage, it also supports shooting in a Log color profile with Apple ProRes encoding, just as the iPhone 15 Pro and Pro Max did before it. That might sound like a baffling string of jargon (because it is), but if you’re not a professional video producer, what it boils down to is this: it lets you shoot professional-looking cinematic video using just your iPhone.
But what do the terms Log and ProRes actually mean? How are they better than your phone’s regular video? And, crucially, should you actually use them when recording your own videos? Here’s everything you need to know to get the best video quality out of an iPhone.
Read more: The iPhone 16 Pro’s High-Res Slow-Motion Video Is the Best Apple Feature in Years
What is ProRes?
ProRes is a video codec created by Apple in 2007 that has been widely adopted by video and cinema professionals. Typically found on high-end video cameras costing many thousands of dollars, the codec captures more data when you’re shooting, resulting in better-quality footage than you’d typically get from a phone or even some dedicated cameras.
What is Log video?
Log (short for “logarithmic”) is a color profile found on some professional video cameras that’s now also available on the iPhone 15 Pro and 16 Pro models (along with an increasing number of Android phones, including the Galaxy S25 Ultra). Log footage preserves more image information in the highlights and shadows, allowing for greater flexibility when editing colors and contrast in post-production.
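If you’re curious what “preserves more information in the highlights and shadows” actually means, the rough idea is that a logarithmic curve spreads the camera’s limited code values more evenly across exposure stops, rather than spending most of them on bright tones the way a plain linear encoding does. Here’s a small illustrative sketch in Python that uses a made-up generic log curve, not Apple’s actual Log profile:

```python
import numpy as np

def linear_encode(x):
    # Plain linear mapping of scene light (0.0-1.0) to 10-bit code values.
    return np.round(np.clip(x, 0, 1) * 1023).astype(int)

def log_encode(x, gain=255.0):
    # Generic log-style curve: lifts the shadows and compresses the
    # highlights so each stop of exposure gets a more even share of
    # code values. Illustrative only -- not Apple's Log formula.
    return np.round(np.log2(1 + gain * np.clip(x, 0, 1)) /
                    np.log2(1 + gain) * 1023).astype(int)

shadows = np.array([0.01, 0.02])    # two tones one stop apart in the shadows
highlights = np.array([0.4, 0.8])   # two tones one stop apart in the highlights

for name, encode in (("linear", linear_encode), ("log", log_encode)):
    s, h = encode(shadows), encode(highlights)
    print(f"{name:>6}: shadow stop spans {s[1] - s[0]} code values, "
          f"highlight stop spans {h[1] - h[0]}")
```

The linear encoding devotes only a handful of code values to that shadow stop, which is exactly the detail that gets crushed; the log curve keeps far more of it, at the cost of a flatter-looking image that needs grading later.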
How do you turn on ProRes and Log video?
On your iPhone 16 Pro or 15 Pro, go into Settings, then scroll down and tap Camera. Tap Formats, and within this submenu you’ll see a section for Video Capture. Toggle Apple ProRes on, and below it you’ll see an option for ProRes Encoding. Tapping it lets you choose among HDR, SDR and Log.
Bear in mind that while you can toggle ProRes on or off directly in the Camera app, you have to go back to the Settings app to switch from Log to HDR or vice versa. You can shoot 4K footage at 30 frames per second on the phone itself, but if you want to shoot at 60 frames per second, you’ll need to connect an external SSD via USB-C and record directly to that. And while you can shoot ProRes footage without Log, you can only shoot Log with ProRes.
Why does Log video look gray and washed out?
Log files straight out of the camera look flat and have low contrast and low saturation. The files are designed to be edited in programs like Adobe Premiere or DaVinci Resolve, where colorists will bring back contrast and color tone according to the look they’re trying to achieve, a process called color grading.
Read more: Best iPhone Camera Accessories for Photos and Videos
The low-contrast look of ungraded footage gives colorists the best starting point to tweak the video image however they want. Log footage always needs to be edited and graded before being used.
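If it helps to picture what grading does to the numbers, here’s a toy sketch in Python; it’s nothing like how Resolve or Premiere actually implement their tools, but it shows the basic idea of stretching the flat log image’s contrast back around a mid-gray pivot and pushing its saturation back up.

```python
import numpy as np

def simple_grade(rgb, contrast=1.6, pivot=0.5, saturation=1.3):
    """Toy grade for flat, log-looking footage: stretch contrast around a
    mid-gray pivot, then boost saturation. Illustrative only."""
    graded = (rgb - pivot) * contrast + pivot
    # Push each channel away from the pixel's luma to add color back.
    luma = graded @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 weights
    graded = luma[..., None] + (graded - luma[..., None]) * saturation
    return np.clip(graded, 0.0, 1.0)

flat_pixel = np.array([[0.52, 0.48, 0.45]])  # washed-out, low-contrast pixel
print(simple_grade(flat_pixel))              # contrast and color separation increase
```

Real color grading layers dozens of far more sophisticated operations, but the principle is the same: the flat log image is a starting point, not a finished look.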
How do I edit Log video?
While Apple has yet to implement specific color editing tools for Log footage on the iPhone (which it should), you can get some of the way there using the exposure tools in the ‘Edit’ options in the Photos app. However, you’ll get your best results by transferring the files to your iPad, Mac or Windows PC and editing in dedicated video production apps.
My favorite is DaVinci Resolve from Blackmagic Design, an industry-standard piece of software used in professional productions and Hollywood films. It’s known for its flexibility with color editing, and I loved using it to see what looks I could achieve from footage shot on both the iPhone 15 Pro Max and Blackmagic’s own Pocket Cinema Camera.
Resolve is available on Macs and PCs, but there’s also an excellent iPad version. Best of all, the software is free to use on all platforms, with only some advanced features requiring the paid-for Studio version. But anyone wanting to spice up their footage will find the free version more than capable.
Blackmagic has also launched a color-editing panel designed to be used with the iPad. The Micro Color Panel gives you fine-grained control over color editing in Resolve and lets you quickly grade your footage using the same sort of pro hardware used on Hollywood movies.
Can my phone shoot ProRes Log video?
Apple introduced the ability to shoot with the ProRes codec on the iPhone 13 Pro, but right now only the iPhone 16 Pro, 16 Pro Max, 15 Pro and 15 Pro Max can also shoot in Log.
Do you have to use the iPhone camera app to shoot ProRes Log video?
No, Apple has opened up this feature to third-party apps. My recommendation is the Blackmagic Camera app, which gives you the same level of control over settings as you’d find on the company’s professional cameras. It’s a superb tool for getting the best-looking video out of your phone and, like DaVinci Resolve, it’s free.
Should I shoot video in ProRes Log?
ProRes footage in the Log profile is very specialized. It requires additional time in post-production to color grade the footage, and the file sizes are many times larger than regular video files (see the rough calculation below). If you just want to shoot footage of your family gathering or your mates at the beach to upload to Instagram or YouTube, then you don’t need to worry about ProRes or Log.
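To put “many times larger” in rough numbers, here’s a back-of-the-envelope calculation. It assumes a data rate of around 700 Mbps, which is in the ballpark of what Apple quotes for ProRes 422 HQ (the flavor the iPhone records) at 4K and 30 frames per second; the exact rate varies with the scene.

```python
# Rough ProRes storage math (approximate bitrate assumption).
BITRATE_MBPS = 700                          # ~ProRes 422 HQ at 4K/30fps
gb_per_minute = BITRATE_MBPS / 8 / 1000 * 60
print(f"~{gb_per_minute:.1f} GB per minute of 4K/30 ProRes footage")
# The iPhone's default HEVC recording runs at a few dozen megabits per
# second, so the same minute is a few hundred megabytes instead.
```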
However, if you want to use your iPhone 16 Pro as a professional video production tool and you have the time and resources to color grade and edit your footage, then you should absolutely give it a go. The flexibility of Log recording lets you get video out of your iPhone that would give dedicated cinema cameras a run for their money, and it makes the iPhone 16 Pro an exceptionally powerful camera for content creators looking to add some professional flair to their videos.
Technologies
FCC Wants to Ban Some Chinese Labs From Testing US-Bound Electronics
The head of the FCC said that allowing some testing labs access to products headed to the US market poses a security risk.
The Federal Communications Commission will vote this month on whether to ban some testing labs based in China from approving electronics products meant for import to the US.
The May 22 vote could affect products ranging from smartphones to game consoles to cameras, which manufacturers pay to have tested for safety, performance and standards such as radio frequency interference. According to the FCC, 75% of this kind of electronics testing is done in labs based in China.
“While the FCC now includes national security checks in our equipment authorization process, we have not had rules on the books that require the test labs conducting those reviews to be trustworthy actors,” wrote Brendan Carr, chair of the FCC, in a post about the vote and other upcoming FCC actions.
Carr said “bad labs” currently participate in the approval and testing process, a loophole he said the vote would help close.
“The order would adopt a rule that prohibits test labs from participating in the FCC’s equipment authorization process if they are owned, controlled, or directed by entities that pose national security risks,” Carr said.
According to Carr’s post, the FCC is also opening comments for a separate action that would put together a list of what he called “regulated entities that are subject to the control of a foreign adversary.”
The moves could keep more electronics made or tested in China from reaching the US. In 2022, the FCC banned Huawei and ZTE electronics over the same kinds of national security risks. Since then, steep tariffs have been enacted against China in an ongoing trade war with the country.
Technologies
Gemini Live Gives You AI With Eyes, and It’s Awesome
When it works, Gemini Live’s new camera mode feels like the future in all the right ways. I put it to the test.
Google’s been rolling out the new Gemini Live camera mode to all Android phones using the Gemini app for free, after a two-week exclusive on Pixel 9 phones (including the new Pixel 9A) and the Galaxy S25 series. In simpler terms, Google has given Gemini the ability to see: it can recognize objects that you put in front of your camera.
It’s not just a party trick, either. Not only can it identify objects, but you can also ask questions about them — and it works pretty well for the most part. In addition, you can share your screen with Gemini so it can identify things you surface on your phone’s display. When you start a live session with Gemini, you now have the option to enable a live camera view, where you can talk to the chatbot and ask it about anything the camera sees. I was most impressed when I asked Gemini where I misplaced my scissors during one of my initial tests.
“I just spotted your scissors on the table, right next to the green package of pistachios. Do you see them?”
Gemini Live’s chatty new camera feature was right. My scissors were exactly where it said they were, and all I did was pass my camera in front of them at some point during a 15-minute live session of me giving the AI chatbot a tour of my apartment.
When the new camera feature popped up on my phone, I didn’t hesitate to try it out. In one of my longer tests, I turned it on and started walking through my apartment, asking Gemini what it saw. It identified some fruit, ChapStick and a few other everyday items with no problem. I was wowed when it found my scissors.
That’s because I hadn’t mentioned the scissors at all. Gemini had silently identified them somewhere along the way and then recalled the location with precision. It felt so much like the future, I had to do further testing.
My experiment with Gemini Live’s camera feature followed the lead of the demo Google gave last summer, when it first showed off these live video AI capabilities. Gemini reminded the person giving the demo where they’d left their glasses, and it seemed too good to be true. But as I discovered, it was very true indeed.
Gemini Live will recognize a whole lot more than household odds and ends. Google says it’ll help you navigate a crowded train station or figure out the filling of a pastry. It can give you deeper information about artwork, like where an object originated and whether it was a limited edition piece.
It’s more than just a souped-up Google Lens. You talk with it, and it talks to you. I didn’t need to speak to Gemini in any particular way — it was as casual as any conversation. Way better than talking with the old Google Assistant that the company is quickly phasing out.
Google also released a new YouTube video for the April 2025 Pixel Drop showcasing the feature, and there’s now a dedicated page on the Google Store for it.
To get started, you can go live with Gemini, enable the camera and start talking. That’s it.
Gemini Live follows on from Google’s Project Astra, first revealed last year as possibly the company’s biggest “we’re in the future” feature: an experimental next step for generative AI, beyond simply typing or speaking prompts into a chatbot like ChatGPT, Claude or Gemini. It comes as AI companies continue to dramatically expand the skills of AI tools, from video generation to raw processing power. Similar to Gemini Live is Apple’s Visual Intelligence, which the iPhone maker released in beta form late last year.
My big takeaway is that a feature like Gemini Live has the potential to change how we interact with the world around us, melding our digital and physical worlds just by pointing a camera at almost anything.
I put Gemini Live to a real test
The first time I tried it, Gemini was shockingly accurate when I placed a very specific gaming collectible of a stuffed rabbit in my camera’s view. The second time, I showed it to a friend in an art gallery. It identified the tortoise on a cross (don’t ask me) and immediately identified and translated the kanji right next to the tortoise, giving both of us chills and leaving us more than a little creeped out. In a good way, I think.
I got to thinking about how I could stress-test the feature. I tried to screen-record it in action, but it consistently fell apart at that task. And what if I went off the beaten path with it? I’m a huge fan of the horror genre — movies, TV shows, video games — and have countless collectibles, trinkets and what have you. How well would it do with more obscure stuff — like my horror-themed collectibles?
First, let me say that Gemini can be both absolutely incredible and ridiculously frustrating in the same round of questions. I had roughly 11 objects that I was asking Gemini to identify, and it would sometimes get worse the longer the live session ran, so I had to limit sessions to only one or two objects. My guess is that Gemini attempted to use contextual information from previously identified objects to guess new objects put in front of it, which sort of makes sense, but ultimately, neither I nor it benefited from this.
Sometimes, Gemini was just on point, easily landing the correct answers with no fuss or confusion, but this tended to happen with more recent or popular objects. For example, I was surprised when it immediately guessed one of my test objects was not only from Destiny 2, but was a limited edition from a seasonal event from last year.
At other times, Gemini would be way off the mark, and I would need to give it more hints to get into the ballpark of the right answer. And sometimes, it seemed as though Gemini was taking context from my previous live sessions to come up with answers, identifying multiple objects as coming from Silent Hill when they were not. I have a display case dedicated to the game series, so I could see why it would want to dip into that territory quickly.
Gemini can get full-on bugged out at times. On more than one occasion, Gemini misidentified one of the items as a made-up character from the unreleased Silent Hill: f game, clearly merging pieces of different titles into something that never was. The other consistent bug I experienced: Gemini would produce an incorrect answer, I would correct it and hint closer at the answer (or straight up give it the answer), only for it to repeat the incorrect answer as if it were a new guess. When that happened, I would close the session and start a new one, which wasn’t always helpful.
One trick I found was that some conversations did better than others. If I scrolled through my Gemini conversation list, tapped an old chat that had gotten a specific item correct, and then went live again from that chat, it would be able to identify the items without issue. While that’s not necessarily surprising, it was interesting to see that some conversations worked better than others, even if you used the same language.
Google didn’t respond to my requests for more information on how Gemini Live works.
I wanted Gemini to successfully answer my sometimes highly specific questions, so I provided plenty of hints to get there. The nudges were often helpful, but not always. Below are a series of objects I tried to get Gemini to identify and provide information about.