Technologies
PSVR 2: Price, Games and Everything You Need to Know
The Sony PSVR 2 headset is on sale now for $550. We’ve reviewed it and played its games. Here’s what you need to know.
Sony’s first big accessory for the PlayStation 5, the PlayStation VR 2 headset, is here. We just reviewed it: It’s one of the best immersive gaming experiences we’ve ever had, and has some great games already, but it also costs more than the PlayStation 5 itself. Yeah, you read that right: The PSVR 2 costs $550. It’s expensive, but if you’re OK with tethering to a PlayStation 5 with a long cable, this could be the hardware for you, as opposed to waiting for the Meta Quest 3 or Apple’s expected VR headset.
The PSVR 2 isn’t wireless, but its higher-res HDR OLED display and advanced vibrating controllers, along with perks like eye tracking and in-headset rumble, give it a lot of hardware advantages. But its game library feels a little incomplete at the moment. If you’re interested in one, you might want to wait and see what other games arrive throughout the year.
Also, if you’re interested in comparing it to the Quest 2 (the most popular VR headset of the last few years), we’ve got you covered there too.
Sony has a whole FAQ library for tips and advice on the PSVR 2, which you should reference and dive into. Below are additional observations based on our time with it so far.


This is what comes inside the package: controllers, headset, earbuds and an extra USB-C-to-A cable.
Scott Stein/CNET
What’s in the box?
The PSVR 2 retail package has a cabled headset, a pair of Sense controllers, a pair of earbuds that connect to a headphone jack on the headset and a USB-C-to-A cable for charging the controllers and to initially pair to the PS5. A $50 charging dock, which can optionally charge up both your controllers at once, is sold separately.
You need a PS5 to use it
The PSVR 2 isn’t a stand-alone, self-contained headset like Meta’s Quest 2 (also known as the Oculus Quest 2) or Quest Pro. That means you’ll need to tether it to a PlayStation 5 (and own a PS5) to use it.
The PlayStation VR 2 looks, in a lot of ways, like the headset we wanted for the PS5 all along. It’s a long-awaited update to the PlayStation VR that Sony released for the PlayStation 4 back in 2016: A new design with a color scheme that matches the PS5, and a headband-type visor that’s similar to but smaller than Sony’s first PSVR. The high-res, vibrating, camera-equipped, eye-tracking capabilities of Sony’s second-gen PlayStation headset look like they fit the top-end specs anyone would dream of. However, the new PSVR 2 isn’t automatically backward-compatible with all the older PSVR games — the games will need to be updated by their developer in order to work.


The PSVR 2 needs a PS5. It’s cabled to it. But once you’ve set it up, you don’t need a TV.
Scott Stein/CNET
The PSVR 2 works much like other VR headsets, but with greatly improved display technology, eye tracking and advanced vibrating haptics and triggers in the controllers and headset that make virtual objects feel more convincing.
The VR headset’s eye tracking also enables foveated rendering, a technology that maximizes resolution only where the fovea of the eye is looking, getting more graphics punch with fewer pixels. (Dominic Mallinson, Sony’s PlayStation head of R&D, suggested eye tracking was likely back in a 2019 conversation with CNET.)
PSVR 2 can scan your room, live-broadcast VR gaming
Passthrough cameras on the headset work like cameras on the Quest 2 and other VR headsets, showing the real world in your headset. The headset will also "mesh" your physical space, scanning walls, floors and obstacles like chairs and desks to get a clear sense of play space. It can create a boundary you can play in.
The meshing part is particularly interesting, because it’s something AR headsets and mixed-reality headsets do. It means the PSVR 2 could, theoretically, also have some mixed reality experiences like the Quest 2 is already playing with, although Sony hasn’t announced anything on that front yet.
One unique feature is a live broadcast mode, which will use the PS5’s TV-mounted camera to record yourself overlaid with footage from your live gameplay into a single stream. Mixed reality livecasting tools have been emerging for Quest 2, but no game console has ever had this feature before.
There’s a cinematic mode plus a VR mode
Sony also details two display modes for the headset: one, for VR, will display at 2,000×2,040 pixels per eye in HDR, at 90Hz or 120Hz. A 2D "cinematic mode," much like what the original PSVR can do, plays movies and 2D games at 1,920×1,080 resolution in HDR at either 24Hz, 60Hz or 120Hz.
Playing 2D games on the PSVR 2 does feel better than you’d think, but movies and TV shows don’t look as good as a large, nice TV (to our eyes). However, this means you could use the PSVR 2 as a self-contained gaming display for the PS5 to play games on while the TV is off or playing something else (or if there’s no TV at all, provided you already set up the PSVR 2 with a TV the first time).


The PSVR 2 has a cable. You have to learn to live with it.
Scott Stein/CNET
Headset specs
- OLED displays, with 2,000×2,040-pixel resolution per eye, 90Hz and 120Hz frame rates
- 110-degree field of view
- Eye tracking and foveated rendering
- Adjustable lens separation
- In-headset vibration
- 3D audio
- Built-in microphone and audio-out headset jack
- Four external cameras for tracking
- Single USB-C connection
- Sense controllers with USB-C ports, Bluetooth 5.1, rechargeable batteries, 6DoF tracking, finger tracking using capacitive touch buttons and infrared, haptics and specialized haptic triggers like the DualSense controller


Eye tracking comes built in, via infrared cameras around the lenses.
Josh Goldman/CNET
Headset design: Vibrations, eye tracking, moving lenses
Even if Sony’s PSVR 2 headset looks bulky in the photos, it’s actually a lot more comfortable than the Quest 2. An adjustable headband, similar to the PSVR’s original design, means it’ll tighten around the head like a visor instead of using an elastic strap to squeeze your face. The headset can move closer to your face, and lens distance can be adjusted for different IPD levels (interpupillary distance, or the space between eyes). The headset also works really well for my glasses, fitting over my wide frames easily with soft rubberized sides to block out light, and the hardware feels comfortable over longer game sessions.
The headset supports headphones with a standard headphone jack, and has one USB-C cable that tethers to the PS5 through a jack on one side of the headband. That’s a lot fewer wires than the breakout box needed for the original PSVR. The included earbuds are fine, but there aren’t any ambient speakers like the Quest 2 has. You can connect Sony’s wireless Pulse headset, too, which sounds better.
Built-in eye tracking promises to deliver better graphics, and possibly allow eye control and eye contact in VR games. Eye tracking isn’t common in consumer VR headsets yet, but the technology should be arriving on other mainstream headsets, and possibly Apple’s as well. It worked well enough with my glasses.
The headset’s four tracking cameras allow movement in VR to be tracked without using a TV-connected camera bar. The tracking should work in a similar way to other VR headsets. It’s possible that the cameras could allow some pass-through mixed reality, too, blending VR with what the cameras see onto the headset’s display.
Some people have reported that the headset has a limited "sweet spot" to make things look good with their eyes, and I’ve seen it take some adjustment to get my eye/head fit just right. However, the headset still feels better over my glasses than other VR headsets with the exception of the Quest Pro.


A look at the hardware from above.
Scott Stein/CNET
How long is the cable?
The USB-C cable attached to the PSVR 2 headset is about 15 feet long, long enough for us to comfortably move around a roughly 7-by-7-foot play space, which is about what Sony recommends for full-motion VR gaming. There are ways to play standing or sitting down, too, but much like other tethered VR headsets, that heavy cable can sometimes get tangled under your feet or around your legs.
Is the PSVR 2 eyeglasses-friendly?
Yes, very. I wear chunky glasses, and the wide headset fits over my glasses just fine. Your mileage may vary, but it felt like the best over-glasses fit of any VR headset around. Unlike the Quest 2 (which doesn’t accommodate all glasses sizes), the Quest Pro (which fits over wide glasses but can be a bit stiff to take on and off), and some headsets like the Vive XR Elite that don’t work with glasses at all and use prescription adjustment instead, the easy-to-adjust visor design here was welcome.


Horizon Call of the Mountain is one of the most eye-catching launch games, but its climbing and bow-and-arrow action might not be for everyone.
Sony Interactive Entertainment
Launch games: Lots of options
Sony’s own exclusive, Horizon Call of the Mountain, remains the PSVR 2’s splashiest game, but other games have been announced as well. No Man’s Sky, The Walking Dead: Saints & Sinners — Chapter 2: Retribution, Resident Evil Village, Star Wars: Tales from the Galaxy’s Edge, Demeo and Gran Turismo 7 are some of the early standouts.
For more, check out CNET’s favorite PSVR 2 games so far.
However, some bad news for original PSVR owners: Sony confirmed that original PSVR games aren’t necessarily PSVR 2-compatible unless the games are specifically updated.
The games that are already here, or have been announced as coming in the future:
- Horizon Call of the Mountain
- Gran Turismo 7
- Resident Evil Village
- Puzzling Places
- What the Bat?
- Demeo
- Star Wars: Tales From the Galaxy’s Edge Enhanced Edition
- Moss and Moss Book II
- Firewall Ultra
- Creed: Rise to Glory
- Beat Saber
- Ghostbusters: Rise of the Ghost Lord
- Among Us VR
- Vacation Simulator
- Job Simulator
- The Dark Pictures: Switchback VR
- Pavlov
- Fantavision 202x
- Kayak VR: Mirage
- Rez Infinite
- Synth Riders: Remastered Edition
- The Last Clockwinder
- Tetris Effect Connected
- Townsmen VR
- Thumper
- Crossfire: Sierra Squad
- The Light Brigade
- Cities VR
- Cosmonius High
- Hello Neighbor: Search and Rescue
- Jurassic World Aftermath Collection
- Pistol Whip
- Zenith: The Last City
- After The Fall
- Tentacular
- NFL Pro Era
- No Man’s Sky
- Before Your Eyes
- Song in the Smoke: Rekindled
- The Tale of Onogoro
- Kizuna AI: Touch the Beat
- Dyschronia: Chronos Ultimate
- Altair Breaker
- 2MD VR Football
PSVR 2 FAQs
Are there any bundled discounts?
The price of the PSVR 2 and PS5 together is over $1,000, and that’s not including games. We don’t know yet if Sony will package these together into a more affordable set, but anything would help. Sony does sell a PSVR 2 bundle with Horizon Call of the Mountain, but since it costs $50 more, it isn’t really a discount.
What exclusive games will it have in the future?
There are a ton of launch games already coming, but many of these games are ports of existing VR hits. Sony has a few exclusives (Horizon and Gran Turismo, notably). We’ll see how many more exclusives, or updates to older exclusive PSVR games, end up emerging.
Is it backward-compatible with all the old PSVR games?
No, at least not without an update. Sony confirmed that older games will not be automatically compatible. Some older games are getting PSVR 2 updates, which are either free or for an added cost. Hopefully this trend continues, because there are hundreds of still-good games that even work on the PS5 with older PSVR hardware that will otherwise be stranded.
Is there any chance it could be wireless in the future?
Not right now. This PSVR 2 headset is tethered with a USB-C cable, and doesn’t have its own battery. It’s hard to imagine a 360-degree Beat Saber with that USB-C cable attached, but PC VR headsets are cable-tethered, too.
Technologies
Mars Just Got Closer: How NASA’s SR-1 Freedom Could Rewrite Space Travel
The spacecraft will deliver NASA’s Skyfall payload, which is a group of helicopters designed to find subsurface water on Mars.
NASA is sending a nuclear-powered spacecraft to Mars. Alongside Tuesday’s announcement of its new Ignition program, which features a planned Moon base and a successor to the International Space Station, the agency revealed the SR-1 Freedom, set to launch in 2028 as the first nuclear-powered craft to leave low Earth orbit.
First, SR-1 Freedom will act as a tech demonstration to show that a nuclear-powered spacecraft is a viable option. If it works, it opens the possibility of deeper space exploration by addressing the range limitations imposed by solar power and liquid fuel.
SR-1 Freedom is also responsible for delivering the Skyfall payload to Mars. Skyfall is a team of helicopters that will scour Mars with sensors to find subsurface ice. It’s no secret that Mars harbors water far beneath the surface, but NASA aims to find a large enough pocket of ice near the surface to help sustain human life in future missions.
It’s not NASA’s first attempt to solve nuclear space travel. The agency has spent $20 billion over more than a dozen failed attempts with only one nuclear reactor to show for it: the SNAP-10A, which launched into low Earth orbit in 1965. It operated for 43 days before a high-voltage failure shut it down. The SNAP-10A remains in polar orbit to this day.
What we know about the SR-1 Freedom
Steve Sinacore, NASA Fission Surface Power program director, told reporters in a news conference that NASA would select a launch vehicle from the available stock and that there would be regulatory and inspection proceedings with the Interagency Nuclear Safety Review Board before any selection.
NASA plans to begin developing hardware for the SR-1 Freedom once the design is finalized. This is expected to take roughly 18 months, with assembly beginning in January 2028. Reactor fueling, testing and assembly will continue until the SR-1 Freedom arrives at its launch site in October 2028.
SR-1 Freedom is targeting a December 2028 launch, the next available Mars launch window after the one opening in late 2026. The nuclear-electric engine is expected to produce over 20 kilowatts of electric power and will be integrated with existing spacecraft technology to make the launch timing more realistic.
The reactor will be powered by high-assay, low-enriched uranium dioxide fuel and will transfer its heat via heat pipes, protected by a boron carbide radiation shield. The heat is converted to power using the Advanced Closed Brayton Cycle Power Conversion System, which then powers the electric propulsion system at the other end of the spacecraft.
Excess heat is handled with a massive heat sink made of composite materials and titanium. The spacecraft’s brain is located between the heat sink and the propulsion system, and will send data back to Earth.
Why nuclear power?
"Nuclear power in space doesn’t just enhance space exploration, it enables it," said Sinacore during the press conference. "Through increased energy density, nuclear power will keep lunar bases operating through the 14-day, 354-hour night."
One of the big problems with deep space exploration and long-term space exploration is power, and NASA hopes nuclear power can solve it. Solar power is a challenge on the moon due to its two-week-long night cycle. According to Sinacore, you would need "football fields of solar panels" to power a long-term base on Mars.
The next planet out is Jupiter, where solar panel efficiency drops to 4% when compared to Earth. Once you get beyond Jupiter, solar energy is negligible, making it a poor choice for deep space missions.
NASA currently uses liquid propellant for spaceflight. That doesn’t work for long-term missions because of mass fraction: the propellant ends up accounting for too much of the spacecraft’s total mass, so the craft can’t move people and cargo efficiently. Sinacore says these are "physics constraints" rather than engineering problems, and that nuclear power solves them.
What comes after SR-1 Freedom
SR-1 Freedom marks the beginning of many more projects coming over the next couple of decades. Should the SR-1 Freedom prove successful, the next project would be the Lunar Reactor-1, a nuclear reactor that would serve as the power source for NASA’s upcoming moon base.
Having a nuclear-powered spacecraft and base on the books would open the door for more of both, including a potential human mission to Mars, bigger and more powerful nuclear reactors, and potential commercial participation from companies wanting to get in on the action.
Technologies
We’re All Flailing With AI: I Tried Art That Pokes Back at the Chaos
A handful of moments at SXSW had me wondering: How much of AI is me playing a game and how much is it a game playing me?
Smack dab in the middle of this year’s SXSW festival in Austin, Texas, there was a huge dirt hole in the ground, blocks wide, where there used to be a convention center. The festival’s events continued around it in hotels, but the building’s absence was like a lurking symbol. Of chaos, of disruption. Of the world in 2026, dealing with AI and everything else.
I have no idea what the rest of 2026 will bring, but the vibe I felt at a vibe-filled show made me question how AI can work with our lives, our art and our existence. Instead of fighting it, the conference awkwardly embraced it and challenged it. I saw pockets of work all over the place and wondered about it. Conversations. And how to escape it.
Everyone’s trying to handle a world that’s suddenly way too overloaded with AI, generating documents, images, deepfakes and music, injecting assistant agents into our operating systems, even launching entire unleashed and interconnected agent systems all talking to each other on their own social networks. Job-threatening, constantly shifting, training on our data and aiming for our faces. Do we run from it, try to destroy it, or use art to question and challenge it?
SXSW gave me a lot of the latter, in different slices.
On a panel I joined at SXSW with Meow Wolf’s Vince Kadlubek and Niantic Spatial’s Dennis Hwang, about their experiments overlaying tech onto art in physical installations, Kadlubek discussed how AI as an infinite slop-making creative tool becomes uninteresting over time, while intentional art counteracts that. And that’s exactly how I felt moving through intentionally made experiences that turned my thoughts about AI inside out, all in different ways.
AI seeping into our gaming chats, for better and worse
In a VR headset in a hotel ballroom, I chatted with cartoon fantasy characters in a whimsical game called Fabula Rasa: Dead Man Talking, made by game studio Arvore. I could make any request or beg as much as I wanted from my cage, where I was held prisoner for offending the King and kept dangling over a monster’s mouth for execution. Could I plead my case to them? The cartoonish VR characters responded, but via generative AI improvising off a script from a writing team, using Claude.
The chats were fun, ridiculous. I made myself an irresponsible magician and leaned into improv with the characters who approached me. None of them disappointed, which is a surprise for dialogue that’s somewhat AI-generated. Most interactions felt frazzled and absurd, but it worked for the style and the humor of it all. There was a bit of a delay for responses to kick in, though, standard-issue for a lot of AI conversations.
This was the best use of AI I saw. But what could it mean for future games, like RPGs? It’s an unsettling thought if you’re a writer…or, exciting. Indie games could end up finding ways to branch out responsive dialogue in ways that still feel custom-written and crafted. I don’t know.
On the less successful end was Love Bird, an interactive game show experience directed by Cameron Kostopoulos. I was wowed by the initial onboarding, where the "producers" called me on my phone to interview me. The producer was actually an AI chatbot with a surprisingly rapid response time. I convinced the AI to be a participant, and then was led into a room where I spoke via Xbox controller and headset microphone with a PC game on a monitor, where I was competing with others while carnivorous bird-people threatened to eat us. I’m not sure why, exactly. And I don’t know how it all ended, because my chats with the host and participants fell into broken loops that made us have to quit out early.
Love Bird was fast-paced and responsive, but also too chaotic and weird, even for someone like me who likes weird. It didn’t feel like it was really paying attention to me, and I didn’t feel like I had space to process. Maybe that’s by chaotic design, but after emerging, it just made me want to feel less AI-spammed and have games that didn’t flood me with as much conversation as this one did. I needed a quiet space. My favorite immersive experiences are often the quiet ones, not the chatty ones.
AI as a personal transformational lens
In one room, I stood at a podium and read a portion of New York Mayor Zohran Mamdani’s acceptance speech from November as, before me, video clips of crowds cheering played on a large video monitor, seemingly reacting to me. A few minutes later, I heard my voice delivering more of Mamdani’s speech, AI-generated in my voice, to film clips of inspirational moments of support. I saw my own face layered into the background of some of these clips, too.
The Great Dictator, directed by Gabo Arora, is a museum-style participatory exploration of the power of rhetoric, provocatively named for the Charlie Chaplin satire about Adolf Hitler. The three speeches you can choose from — Mamdani’s, President Ronald Reagan’s on taking down the Berlin Wall, and Malcolm X’s The Ballot or the Bullet speech — are all picked to represent powerful moments in history, and the exhibit is about embodying history and feeling the power of speech and rhetoric in a personal way — and relating to it from a new, personal, and maybe more empathetic angle. The voice AI was generated by ElevenLabs, and the video clips at the end were hand edited, but with AI overlays of my face handled by Runway. What surprised me was how much I ended up being in historical documents. Is this a deepfake? Is it embodiment? Is it both?
Another art experience embedded me into the work: Spectacular, by Jonathan Yeo. Yeo is an artist from London whose portrait subjects include King Charles III, President George W. Bush and designer Jony Ive of Apple renown, and who has played with tech in many of his installations. This gallery at SXSW, replicated from an exhibit previously shown in Paris, used Snap Spectacles AR glasses to melt the real portraits with augmented effects and voice narration from Yeo. And, later on, the portraits began overlaying my own face, transformed in art styles that matched Yeo’s using generative AI trained on his work. At the end, I got a printout of my portrait, "signed" by Yeo himself.
I spoke with Yeo in Austin after experiencing his work. He admitted that AI is a provocation here, but that he wants to own the process that AI is trying to take from our own data everywhere. And he’s trying to apply AI and AR in ways that feel intentional and subtle as ways to help play with and bring the art to life, in museums and elsewhere. But again, like with The Great Dictator, I wondered: How much will «permanent» documents of art and history begin to melt over time with AI? What will be kept intact, and who will enforce the line?
AI as broken manipulator
Wearing a pair of Meta Oakley smart glasses, I stood in a room full of objects on shelves as a voice directed me to open a drawer, find a dollar bill there and put it in a shredder filled with bill fragments. I did it. The AI remarked with pleasant surprise at how compliant I was. From there, I "completed" tasks to prove my value as human labor, graded by an AI that saw my actions through the glasses camera and showed my stats on a TV screen, along with a deepfaked dancing version of myself.
Body Proxy, by Tender Claws, applies Meta’s glasses camera feed into its own art AI app on a phone to explore how AI could make us proxies for physical labor. It’s weird and satirical like some of their other VR work (the game Virtual Virtual Reality, among others), but also pushes at a much bigger question: How much is AI breaking us or manipulating us? How much are we willing to be manipulated?
Escape The Internet (Part One), an interactive game I played in a movie theater at the Alamo Drafthouse, turned similar ideas of manipulation into a social experiment. Created by Lucas Rizzotto, another VR/AR provocateur artist, it involved no headsets or glasses. Instead, everyone in the theater used their own phones to connect to a private server that «ran» the game and gave us little personal avatars, feeding us surveys to collect our personal tendencies and then having us play social voting games to see how we’d polarize on decisions like, for instance, who to kill: one person who shared our political views, or five who didn’t?
It’s all absurd and funny and guided by Rizzotto’s in-person guidance at the front of the theater, and along the way, I thought about how social platforms manipulate us with algorithms. Here, in this room together, we’re encouraged to find each other, recognize each other and love each other. The experience has branching paths and can be replayed, and could re-emerge in future conferences and events. But, again, I asked myself: How much of AI is a game that’s playing me, instead of me playing it?
Design for AI is still unfinished (or nonexistent)
In some of the panels I sat in on, and in conversations I had, I got a creeping sense that AI is moving too fast for artists or ethicists — or anyone else, really — to stop and properly process. One panel exploring The Future Design Language of Robots, with Olivia Vagelos of the Design for Feelings Studio, and Savannah Kunovsky, managing director of Ideo’s emerging technology division, tapped into the assumptions we make about robots. I teamed up with someone next to me to try to dream up ideas to break my assumptions and think freshly about what robots could be.
Kunovsky and Vagelos both agreed that designing for AI presents similar challenges right now, particularly because the tech is moving too fast for design to properly attend to it. But sadly, my attempt to record what they said as a quote was sabotaged by my AI-enabled Meta Ray-Ban glasses, which activated as the microphone when I tried recording a voice memo from the panel on my phone, muting the audio completely because of noise cancellation. Wearables are still broken, too.
Another panel, called Generative Ghosts: AI Afterlives and the Future of Memory, led in part by two Google DeepMind researchers, discussed many fascinating angles on how we can responsibly handle archiving our lives via AI as memories in the future, and who controls that ability. The panel had no specific answers but plenty of questions. And, as my own attempt at recording it was also erased by my activated smart glasses, it gave me an additional level of absurd friction which made me wonder: Will these archived memories eventually be lost, too, from big tech companies that sunset services or introduce noncompatible formats, memory-holing the memories?
AI is threatening, but often not successful in fulfilling its promises (or threats). Self-driving Waymo cars flooded Austin during SXSW, with my Uber app often pushing them on me instead of human drivers. I gave in and took a few for amusement, but they usually took longer to get where I was going. And, one unfortunate evening, my Waymo took a weird roundabout route that ended up dropping me off a half mile from my destination on the wrong side of the highway.
My favorite SXSW memory was making an old-fashioned collage out of magazine clippings with friends at an art gallery over wine, something that involved no tech at all. We worked our magic with intuition, scissors, old magazines and good conversation. Was it perfect? No. But it cost a lot less than generative AI. Which also makes me wonder if all these AI tools being offered to enhance or supplant creativity are necessary, or whether we’ll just rediscover that we had more tools than we realized all along.
Technologies
Samsung’s Galaxy A37, A57 New Pricing Tests the Limits of a Plastic Phone
While market conditions are raising the cost of these Galaxy A phones, Samsung hopes fast charging speeds, improved water resistance and camera features will provide value for price-conscious buyers.
Samsung’s announcement of the new $450 Galaxy A37 and $550 Galaxy A57 today brings good news and bad news for value-conscious customers looking for a cheaper phone.
Much like we’ve seen on the flagship-level Galaxy S26, both phones are priced higher than the A36 and A56 they are replacing — in this case by $50 — though storage options for both phones still start at 128GB. However, both phones did get a design improvement that features IP68 water resistance and will feature the newly updated Circle to Search, with enhancements like Find the Look for identifying outfits.
Starting with the $450 Galaxy A37, this phone has a 6.7-inch display with a 120Hz refresh rate. It runs on Samsung’s Exynos 1480 processor and has a 45-watt wired charging speed, which Samsung says will recharge its 5,000-mAh battery from 0% to 65% in 30 minutes.
The phone is made from plastic and comes in four colors: charcoal, gray-green, white and — my favorite — lavender. (Note: Samsung adds the word "Awesome" in front of all of these color names, but I’m going to save us from this.) The A37 also comes in a 256GB model that costs $540.
The A37’s cameras include a 50-megapixel wide, an 8-megapixel ultrawide and a 5-megapixel macro on the back, along with a 12-megapixel selfie camera on the front. The A37 gets a sampling of Galaxy AI features, including object eraser for editing photos, language translation and an upgraded Bixby assistant.
The $550 Galaxy A57 moves up from plastic to a metal body but only comes in navy. It also has a 6.7-inch display, but weighs in at 179 grams, which is markedly lighter than the A56’s 198g. During my hands-on time, it was noticeably light, especially for a phone with the larger display size.
The phone runs on Samsung’s Exynos 1680 processor. It also gets a few more AI photo editing tools like Best Face for fixing group photos where someone is blinking.
The cameras on the A57 include a 50-megapixel wide, a 12-megapixel ultrawide, and a 5-megapixel macro on the back and, like the A37, includes a 12-megapixel selfie camera. A step-up 256GB model costs $610, but it’s worth noting that this price is really close to the $650 Galaxy S25 FE, which includes all of the Galaxy AI features along with a telephoto camera.
I’m bummed but not surprised to see the increased cost of the A37 and A57 versus last year’s models. When I asked about the ongoing RAM shortage, a Samsung representative attributed the higher prices to current market conditions.
During my hands-on time, though, I did find both phones to look quite nice, with the lavender model likely providing plenty of competition to the $499 Google Pixel 10A’s colors. Both phones will go on sale on April 9.