Technologies
iPhone 17 Pro Loses Fight Against the Oppo Find X9 Pro’s Camera
I didn’t expect Apple’s best phone to struggle so much against the Oppo Find X9 Pro.
The iPhone 17 Pro is unquestionably among the best camera systems available. It can take amazing images in all sorts of conditions with almost no effort on your part. But there are a number of top-end Android phones that pack serious photography setups, too — and the Oppo Find X9 Pro is just such a device. Its triple rear camera is potent, and capable of taking beautiful images from both its wide and 200-megapixel zoom cameras.
The Find X9 Pro is a powerhouse phone in all respects, which is why it scored so highly in my full review — and why it was given a coveted CNET Editors’ Choice Award. So to see just how it stacks up against the iPhone 17 Pro, I took it out on a series of photo missions around my beautiful home city of Edinburgh.
Before we dive in, a quick note about the images. They were all shot with each phone’s default camera mode in JPEG with no other settings applied (the Photographic Style on the iPhone was set to Standard). The images have been imported into Lightroom for the purposes of comparison and exporting at file sizes that will play nicely on the internet, but no other edits, sharpening or noise reduction have been applied.
Remember that while some decisions about which images look better might be obvious (such as a lack of detail or image processing aberrations), others will simply come down to personal opinion. I’m a professional photographer, so I typically look for an image that captures the scene more naturally. You may like a more vibrant image with high contrast, so take my findings with a pinch of salt.
With that said, let’s dive in.
Wide cameras comparison
Starting off with this easy snap overlooking the train tracks. Both phones have exposed their images well, but the Oppo’s shot has more natural warm tones on the brickwork of the wall — the iPhone’s looks more magenta. The Oppo’s colors are more vibrant, too, but not overly so.
Switching to the ultrawide lens, the blue sky definitely looks oversaturated in the Oppo’s shot. And here’s where we have to dive deeper; Oppo’s image has had more digital sharpening applied to it, which helps some details look crisp, but it’s also got a lot of noise reduction, which smooths details in other areas.
If we look up close at this section of wall, we can see that the strong lines of mortar between the bricks look sharper in the Oppo’s photo on the right. But the bricks themselves look almost polished as they’ve been stripped of detail by the noise reduction. The iPhone’s image has retained that detail.
Another weird one to analyze. The wooden box of the library is unquestionably sharper on the Oppo’s shot, with even the minute scratches on the perspex being clearly visible. But as soon as we look further out toward the edges of the frame, that detail plummets.
Zooming in close on a section to the right side of the frame, it’s clear that the Oppo’s image severely lacks detail compared to the iPhone’s image. Whether this is an image processing issue or due to the quality of the lens, I’m not sure, but it’s surprising to see, especially given how sharp the rest of the image is.
This indoor shot on the main camera feels like a slightly easier win for the Oppo. Its image is brighter and colors look richer without being too punchy. As before, it both sharpens some areas and reduces texture in others. There’s a lack of detail toward the edge of the frame, but you’d only notice if you really get up close to the pixels. Overall, I prefer the look of the Oppo’s shot.
And it’s the same when I switched to the ultrawide lens — the Oppo takes the win here.
I love the balanced exposure from both phones in this vibrant outdoor scene, but I prefer the warmer tone of the Oppo’s shot. The iPhone’s photo looks like it saw all the golden colors and set its auto white balance on the cooler side to compensate. The Oppo produced a more true-to-life image and I think it’s a great shot as a result.
I don’t like the Oppo’s effort here, though. It artificially brightened the shadows way too much, giving this scene a fake HDR look that screams, “I took this on an Android phone.” The iPhone takes an easy win with its more natural handle on shadows.
I’m conflicted on this one. The Oppo’s shot is brighter and more vibrant, but it’s almost too much. The blue sky is a bit on the electric-blue side for my taste, while the buildings in the center of the frame look slightly too bright. Still, I think I prefer its rendition to the iPhone’s, which does look a little drab by comparison.
At 2x zoom, this indoor scene looks solid on both phones. Overall, I think the Oppo’s shot takes the win as it’s brighter and sharper than the iPhone’s.
Taking each phone up to its maximum default zoom levels (8x on the iPhone, 6x on the Oppo), the results look dramatically different. The color balance is wildly different, for one thing, with the iPhone leaning more into teal tones while the Oppo’s photo has a more magenta cast to it. Honestly, neither one looks especially realistic, with both phones going a bit too hard in different directions. What I did notice is that the Oppo’s image has gone overboard with the digital sharpening, resulting in a crunchiness to the details that I’m not a fan of.
The huge amount of digital sharpening on the Oppo’s shot is clear when you zoom in on the details.
This is an odd one; at max zoom, the Oppo has catastrophically failed to render the details on the side of the building.
Check out this detailed crop; I don’t know what the Oppo was doing in its image, but that building has been turned into a bizarre, smeary mess. The iPhone has done a superb job of capturing those distant fine details.
Seagulls on a log. There’s very little to choose between either phone in this example. Take your pick!
The Oppo Find X9 Pro does have a secret weapon when it comes to zoom, though, in the form of the Hasselblad telephoto zoom accessory. This optional lens attaches to the phone and gives huge zoom lengths — up to 40x — while retaining excellent quality. You can see the difference here in the maximum zoom range of the iPhone against the zoom of the Find X9 Pro with the lens attached; it’s both closer and sharper.
I absolutely love using the lens add-on for street photography, as you can get some great candid moments without anyone noticing. It’s worth keeping in mind, though, that the Hasselblad lens for the phone is an eye-watering £435 or $580 (based on a rough conversion of the 499 euro price), and third-party telephoto lenses from the likes of Sandmarc are also available for the iPhone.
Night photography
The iPhone’s night mode shot here does look brighter, but I prefer the richer contrast on the Oppo’s shot. Otherwise, it’s a pretty even match.
But it’s a much easier win for the Oppo here. The deeper contrast has helped keep some of the flare from the lights at bay, while the details on the front of the building are much sharper.
This indoor scene is brighter, warmer and more vibrant on the Oppo and I much prefer it as a result.
The iPhone’s image is brighter here, especially in the sky, but if you zoom in on the details, the Oppo’s image is sharper.
And it’s basically the same story when you switch to the ultrawide lens.
When we jump to the zooms, though, the Oppo has ramped up the sharpening again, resulting in an image that looks rather over-processed.
I caught a glorious sunset one evening, but only the iPhone managed to do it justice. I love the iPhone’s natural tones and deep shadows, whereas the Oppo has delivered an oversaturated shot that looks like I applied a tacky filter before posting it to Instagram.
And it’s the same here with the Oppo’s shot looking saturated against the iPhone’s more realistic version.
But the difference was most obvious when using the zoom lenses on both phones. The iPhone’s shot not only has more natural colors, but the Oppo’s heavy-handed processing has given the lighthouse an unpleasant halo (a light haziness around its edges) that really spoils the shot.
I ended on a selfie, and here both phones went in interesting directions. The Oppo is certainly the winner to my eye — its shot is considerably sharper (without overdoing it), with more natural skin tones and an accurate orange hue on my jacket. The background is a bit overly cyan, but it’s certainly a better-looking attempt than the iPhone’s.
iPhone 17 Pro vs. Oppo Find X9 Pro: Which takes better photos?
I was surprised at the results. Oppo’s phones — and its sister company OnePlus’s phones — have had a history of leaning hard into image processing with often wildly brightened shadows, too much sharpening and inaccurate colors that resulted in shots that were only really okay for casual snaps. The Find X9 Pro does have some of that (the image of the red restaurant front is a particularly egregious example of shadow brightening) but it’s way more toned down than I expected.
In fact, it delivered shots in many instances that I preferred over the iPhone’s. The golden hues of the tree-lined pathway shot looked sublime on the Oppo, while the warmer, brighter tones inside the pub were a clear victory for the X9 Pro. Most of the images from the Oppo’s main camera I preferred over the iPhone’s, including some at night. It wasn’t a win in every instance and it just goes to show that each phone’s image processing will still trip up in different scenarios.
But overall, I think I have to give the win to the Oppo Find X9 Pro. Its ability to capture scenes accurately, with just enough processing to give images a little pop without going overboard, is admirable. It’s safe to say, then, that if you’re looking for a high-performance Android camera phone, the Find X9 Pro is certainly one to consider.
We’re All Flailing With AI: I Tried Art That Pokes Back at the Chaos
A handful of moments at SXSW had me wondering: How much of AI is me playing a game and how much is it a game playing me?
Smack dab in the middle of this year’s SXSW festival in Austin, Texas, there was a huge dirt hole in the ground, blocks wide, where there used to be a convention center. The festival’s events continued around it in hotels, but the building’s absence was like a lurking symbol. Of chaos, of disruption. Of the world in 2026, dealing with AI and everything else.
I have no idea what the rest of 2026 will bring, but the vibe I felt at a vibe-filled show made me question how AI can work with our lives, our art and our existence. Instead of fighting it, the conference awkwardly embraced it and challenged it. I saw pockets of AI-questioning work all over the place, had conversations about it and wondered how to escape it.
Everyone’s trying to handle a world that’s suddenly way too overloaded with AI, generating documents, images, deepfakes and music, injecting assistant agents into our operating systems, even launching entire unleashed and interconnected agent systems all talking to each other on their own social networks. Job-threatening, constantly shifting, training on our data and aiming for our faces. Do we run from it, try to destroy it, or use art to question and challenge it?
SXSW gave me a lot of the latter, in different slices.
In a panel I was in at SXSW with Meow Wolf’s Vince Kadlubek and Niantic Spatial’s Dennis Hwang, about their experiments overlaying tech onto art in physical installations, Kadlubek discussed how AI’s infinite creative slop becomes uninteresting over time, while intentional art counteracts that. And that’s exactly how I felt moving through intentionally made experiences that turned my thoughts about AI inside out, all in different ways.
AI seeping into our gaming chats, for better and worse
In a VR headset in a hotel ballroom, I chatted with cartoon fantasy characters in a whimsical game called Fabula Rasa: Dead Man Talking, made by game studio Arvore. I could make any request or beg as much as I wanted from my cage, where I was held prisoner for offending the King and kept dangling over a monster’s mouth for execution. Could I plead my case to them? The cartoonish VR characters responded, but via generative AI improvising off a script from a writing team, using Claude.
The chats were fun, ridiculous. I made myself an irresponsible magician and leaned into improv with the characters who approached me. None of them disappointed, which is a surprise for dialogue that’s somewhat AI-generated. Most interactions felt frazzled and absurd, but it worked for the style and the humor of it all. There was a bit of a delay for responses to kick in, though, standard-issue for a lot of AI conversations.
This was the best use of AI I saw. But what could it mean for future games, like RPGs? It’s an unsettling thought if you’re a writer…or, exciting. Indie games could end up finding ways to branch out responsive dialogue in ways that still feel custom-written and crafted. I don’t know.
On the less successful end was Love Bird, an interactive game show experience directed by Cameron Kostopoulos. I was wowed by the initial onboarding, where the “producers” called me on my phone to interview me. The producer was actually an AI chatbot with a surprisingly rapid response time. I convinced the AI to let me be a participant, and then was led into a room where I spoke via Xbox controller and headset microphone with a PC game on a monitor, competing with others while carnivorous bird-people threatened to eat us. I’m not sure why, exactly. And I don’t know how it all ended, because my chats with the host and participants fell into broken loops that forced us to quit out early.
Love Bird was fast-paced and responsive, but also too chaotic and weird, even for someone like me who likes weird. It didn’t feel like it was really paying attention to me, and I didn’t feel like I had space to process. Maybe that’s by chaotic design, but after emerging, it just made me want to feel less AI-spammed and have games that didn’t flood me with as much conversation as this one did. I needed a quiet space. My favorite immersive experiences are often the quiet ones, not the chatty ones.
AI as a personal transformational lens
In one room, I stood at a podium and read a portion of New York Mayor Zohran Mamdani’s acceptance speech from November as, before me, video clips of crowds cheering played on a large video monitor, seemingly reacting to me. A few minutes later, I heard my voice delivering more of Mamdani’s speech, AI-generated in my voice, to film clips of inspirational moments of support. I saw my own face layered into the background of some of these clips, too.
The Great Dictator, directed by Gabo Arora, is a museum-style participatory exploration of the power of rhetoric, provocatively named for the Charlie Chaplin satire about Adolf Hitler. The three speeches you can choose from — Mamdani’s, President Ronald Reagan’s on taking down the Berlin Wall, and Malcolm X’s The Ballot or the Bullet speech — are all picked to represent powerful moments in history, and the exhibit is about embodying history and feeling the power of speech and rhetoric in a personal way — and relating to it from a new, personal, and maybe more empathetic angle. The voice AI was generated by ElevenLabs, and the video clips at the end were hand edited, but with AI overlays of my face handled by Runway. What surprised me was how much I ended up being in historical documents. Is this a deepfake? Is it embodiment? Is it both?
Another art experience embedded me into the work: Spectacular, by Jonathan Yeo. Yeo is an artist from London whose portrait subjects include King Charles III, President George W. Bush and designer Jony Ive of Apple renown, and who has played with tech in many of his installations. This gallery at SXSW, replicated from an exhibit previously shown in Paris, used Snap Spectacles AR glasses to melt the real portraits with augmented effects and voice narration from Yeo. And, later on, the portraits began overlaying my own face, transformed in art styles that matched Yeo’s, using generative AI trained on his work. At the end, I got a printout of my portrait, “signed” by Yeo himself.
I spoke with Yeo in Austin after experiencing his work. He admitted that AI is a provocation here, but that he wants to own the process that AI is trying to take from our own data everywhere. And he’s trying to apply AI and AR in ways that feel intentional and subtle as ways to help play with and bring the art to life, in museums and elsewhere. But again, like with The Great Dictator, I wondered: How much will «permanent» documents of art and history begin to melt over time with AI? What will be kept intact, and who will enforce the line?
AI as broken manipulator
Wearing a pair of Meta Oakley smart glasses, I stood in a room full of objects on shelves as a voice directed me to open a drawer, find a dollar bill there and put it in a shredder filled with bill fragments. I did it. The AI remarked with pleasant surprise at how compliant I was. From there, I completed tasks to prove my value as human labor, graded by an AI that saw my actions through the glasses’ camera and showed my stats on a TV screen, along with a deepfaked dancing version of myself.
Body Proxy, by Tender Claws, applies Meta’s glasses camera feed into its own art AI app on a phone to explore how AI could make us proxies for physical labor. It’s weird and satirical like some of their other VR work (the game Virtual Virtual Reality, among others), but also pushes at a much bigger question: How much is AI breaking us or manipulating us? How much are we willing to be manipulated?
Escape The Internet (Part One), an interactive game I played in a movie theater at the Alamo Drafthouse, turned similar ideas of manipulation into a social experiment. Created by Lucas Rizzotto, another VR/AR provocateur artist, it involved no headsets or glasses. Instead, everyone in the theater used their own phones to connect to a private server that «ran» the game and gave us little personal avatars, feeding us surveys to collect our personal tendencies and then having us play social voting games to see how we’d polarize on decisions like, for instance, who to kill: one person who shared our political views, or five who didn’t?
It’s all absurd and funny and guided by Rizzotto’s in-person guidance at the front of the theater, and along the way, I thought about how social platforms manipulate us with algorithms. Here, in this room together, we’re encouraged to find each other, recognize each other and love each other. The experience has branching paths and can be replayed, and could re-emerge in future conferences and events. But, again, I asked myself: How much of AI is a game that’s playing me, instead of me playing it?
Design for AI is still unfinished (or nonexistent)
In some of the panels I sat in on, and in conversations I had, I got a creeping sense that AI is moving too fast for artists or ethicists — or anyone else, really — to stop and properly process. One panel exploring The Future Design Language of Robots, with Olivia Vagelos of the Design for Feelings Studio and Savannah Kunovsky, managing director of Ideo’s emerging technology division, tapped into the assumptions we make about robots. I teamed up with someone next to me to try to dream up ideas that would break my assumptions and let me think freshly about what robots could be.
Kunovsky and Vagelos both agreed that designing for AI presents similar challenges right now, particularly because the tech is moving too fast for design to properly attend to it. But sadly, my attempt to record what they said as a quote was sabotaged by my AI-enabled Meta Ray-Ban glasses, which activated as the microphone when I tried recording a voice memo from the panel on my phone, muting the audio completely because of noise cancellation. Wearables are still broken, too.
Another panel, called Generative Ghosts: AI Afterlives and the Future of Memory, led in part by two Google DeepMind researchers, discussed many fascinating angles on how we can responsibly handle archiving our lives via AI as memories in the future, and who controls that ability. The panel had no specific answers but plenty of questions. And, as my own attempt at recording it was also erased by my activated smart glasses, it gave me an additional level of absurd friction which made me wonder: Will these archived memories eventually be lost, too, from big tech companies that sunset services or introduce noncompatible formats, memory-holing the memories?
AI is threatening, but often not successful in fulfilling its promises (or threats). Self-driving Waymo cars flooded Austin during SXSW, with my Uber app often pushing them on me instead of human drivers. I gave in and took a few for amusement, but they usually took longer to get where I was going. And, one unfortunate evening, my Waymo took a weird roundabout route that ended up dropping me off a half mile from my destination on the wrong side of the highway.
My favorite SXSW memory was making an old-fashioned collage out of magazine clippings with friends at an art gallery over wine, something that involved no tech at all. We worked our magic with intuition, scissors, old magazines and good conversation. Was it perfect? No. But it cost a lot less than generative AI. Which also makes me wonder if all these AI tools being offered to enhance or supplant creativity are necessary, or whether we’ll just rediscover that we had more tools than we realized all along.
Samsung’s Galaxy A37, A57 New Pricing Tests the Limits of a Plastic Phone
While market conditions are raising the cost of these Galaxy A phones, Samsung hopes fast charging speeds, improved water resistance and camera features will provide value for price-conscious buyers.
Samsung’s announcement of the new $450 Galaxy A37 and $550 Galaxy A57 today brings good news and bad news for value-conscious customers looking for a cheaper phone.
Much like we’ve seen on the flagship-level Galaxy S26, both phones are priced higher than the A36 and A56 they are replacing — in this case by $50 — though storage options for both phones still start at 128GB. However, both phones did get a design upgrade with IP68 water resistance, and both will feature the newly updated Circle to Search, with enhancements like Find the Look for identifying outfits.
Starting with the $450 Galaxy A37, this phone has a 6.7-inch display with a 120Hz refresh rate. It runs on Samsung’s Exynos 1480 processor and has a 45-watt wired charging speed, which Samsung says will recharge its 5,000-mAh battery from 0% to 65% in 30 minutes.
The phone is made from plastic and comes in four colors: charcoal, gray-green, white and — my favorite — lavender. (Note: Samsung adds the word “Awesome” in front of all of these color names, but I’m going to save us from this.) The A37 also comes in a 256GB model that costs $540.
The A37’s cameras include a 50-megapixel wide, an 8-megapixel ultrawide and a 5-megapixel macro on the back, along with a 12-megapixel selfie camera on the front. The A37 gets a sampling of Galaxy AI features, including object eraser for editing photos, language translation and an upgraded Bixby assistant.
The $550 Galaxy A57 moves up from plastic to a metal body but only comes in navy. It also has a 6.7-inch display, but weighs in at 179 grams, which is markedly lighter than the A56’s 198g. During my hands-on time, it was noticeably light, especially for a phone with the larger display size.
The phone runs on Samsung’s Exynos 1680 processor. It also gets a few more AI photo editing tools like Best Face for fixing group photos where someone is blinking.
The cameras on the A57 include a 50-megapixel wide, a 12-megapixel ultrawide, and a 5-megapixel macro on the back and, like the A37, includes a 12-megapixel selfie camera. A step-up 256GB model costs $610, but it’s worth noting that this price is really close to the $650 Galaxy S25 FE, which includes all of the Galaxy AI features along with a telephoto camera.
I’m bummed but not surprised to see the increased cost of the A37 and A57 versus last year’s models, which, when I asked about the ongoing RAM shortage, a Samsung representative attributed to current market conditions.
During my hands-on time, though, I did find both phones to look quite nice, with the lavender model likely providing plenty of competition to the $499 Google Pixel 10A’s colors. Both phones will go on sale on April 9.
Today’s NYT Strands Hints, Answers and Help for March 25 #752
Here are hints and answers for the NYT Strands puzzle for March 25, No. 752.
Looking for the most recent Strands answer? Click here for our daily Strands hints, as well as our daily answers and hints for The New York Times Mini Crossword, Wordle, Connections and Connections: Sports Edition puzzles.
Today’s NYT Strands puzzle is a fun one, but it might make you hungry. Some of the answers are difficult to unscramble, so if you need hints and answers, read on.
I go into depth about the rules for Strands in this story.
If you’re looking for today’s Wordle, Connections and Mini Crossword answers, you can visit CNET’s NYT puzzle hints page.
Read more: NYT Connections Turns 1: These Are the 5 Toughest Puzzles So Far
Hint for today’s Strands puzzle
Today’s Strands theme is: Intermission mission.
If that doesn’t help you, here’s a clue: Movie candy.
Clue words to unlock in-game hints
Your goal is to find hidden words that fit the puzzle’s theme. If you’re stuck, find any words you can. Every time you find three words of four letters or more, Strands will reveal one of the theme words. These are the words I used to get those hints but any words of four or more letters that you find will work:
- ROBE, BORE, WEEDS, WEED, RENT, RIND, CORN, SCAN, SPAN, SPANS, SAND, CANE, CANT, CROSS, COIN
Answers for today’s Strands puzzle
These are the answers that tie into the theme. The goal of the puzzle is to find them all, including the spangram, a theme word that reaches from one side of the puzzle to the other. When you have all of them (I originally thought there were always eight but learned that the number can vary), every letter on the board will be used. Here are the nonspangram answers:
- BEER, SODA, CANDY, FRIES, WATER, POPCORN, PRETZEL
Today’s Strands spangram
Today’s Strands spangram is CONCESSIONS. To find it, start with the C that’s three letters to the right on the top row, and wind down.
Toughest Strands puzzles
Here are some of the Strands topics I’ve found to be the toughest.
#1: Dated slang. Maybe you didn’t even use this lingo when it was cool. Toughest word: PHAT.
#2: Thar she blows! I guess marine biologists might ace this one. Toughest word: BALEEN or RIGHT.
#3: Off the hook. Again, it helps to know a lot about sea creatures. Sorry, Charlie. Toughest word: BIGEYE or SKIPJACK.