
Technologies

Pixel 7 Pro Actually Challenges My $10,000 DSLR Camera Gear

My full-frame Canon camera is better, but Google’s flagship phone opens creative options far beyond snapshots.

Google got my attention by bragging about the Pixel 7 Pro's "pro-level zoom" and asserting that the Android phone's photography features can challenge traditional cameras. I'm one of those serious photographers who hauls around a heavy camera and a bunch of bulky lenses. But I also love phone photography, so I decided to test Google's claims.

At its October launch event, Google touted the Pixel 7 Pro's telephoto zoom for magnifying distant subjects, its Tensor G2-powered AI processing, its faster Night Sight for low-light scenes and a new macro ability for closeup photos. "It cleverly combines state-of-the-art hardware, software and machine learning to create amazing zoom photos across any magnification," Pixel camera hardware chief Alexander Schiffhauer said. Google wants you to think of this phone as offering a continuous zoom range from ultrawide angle to supertelephoto.

As you might imagine, I got better results from my "real" camera equipment, which would cost $10,000 if purchased new today. Even though my Canon 5D Mark IV is now 6 years old, it's hard to beat a big image sensor and big lenses when it comes to color, sharpness, detail and a wide dynamic range spanning bright and dark tones.

But the Pixel 7 Pro's photographic flexibility challenges my camera setup better than any other phone I've used, even outperforming my DSLR in some circumstances and earning a "stellar" rating from CNET editor Andrew Lanxon. While my camera and four lenses fill a whole backpack, Google's smartphone fits in my pocket. And of course that $900 smartphone lets me share a selfie, check my email, pay for the groceries and tackle the daily crossword puzzle.

With the steady annual improvement in smartphone camera hardware and image processing, a smartphone isn’t just a better-than-nothing camera. These little slices of electronics are increasingly able to nail important shots and open up new creative possibilities for those who are discovering the rewards of photography.

I’ll keep hauling my DSLR on hikes and family outings. But because I won’t always have it with me, the Pixel 7 Pro — in particular its zoom and low-light abilities — means I won’t be as worried about missing the shot when I don’t.

My Canon 5D Mark IV, which costs $2,700 new these days, most often has the $1,900 Canon EF 24-70mm f/2.8L II USM lens mounted. I also use the $2,400 EF 100-400mm f/4.5-5.6L IS II USM for telephoto shots, the $1,300 ultrawide EF 16-35mm f/4L IS USM zoom, the $1,300 EF 100mm f/2.8L Macro IS USM for closeups, and the $429 Extender EF 1.4X III for more telephoto reach when photographing birds. Here’s how that gear stacks up against the Pixel 7 Pro’s 0.5x ultrawide, 1x main camera and 5x telephoto camera.

Google Pixel 7 Pro vs. Canon 5D Mark IV, main camera

With plenty of light, the Pixel 7 Pro’s 24mm main camera does a good job capturing color and detail in its 12-megapixel images. Check the comparisons here (and note that my DSLR shoots in a more elongated 3:2 aspect ratio than the Pixel 7 Pro’s 4:3).

Pixel peeping shows the phone can't hold a candle to my 30-megapixel DSLR when it comes to detail. If you're printing posters or need a lot of detail for photo editing, a modern DSLR or mirrorless camera is worth it. But 12 megapixels is plenty for most purposes. Check the cropped images below to see what's going on up close.

Google missed a chance to shoot even higher resolution photos than my 30-megapixel DSLR, though. The Pixel 7 Pro’s main camera has a 50-megapixel sensor. It takes 12-megapixel photos using an approach called pixel binning that combines each 2×2 pixel group on the sensor into one effectively larger pixel. That means better color and low-light performance when shooting at 24mm. But you can use those 50 megapixels differently by skipping the pixel binning and shooting in the sensor’s full resolution when there’s sufficient light. That’s exactly what Apple does with the iPhone 14 Pro camera, and I wish Google did the same.
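The binning arithmetic is easy to sketch. This is a toy model, not Google's actual pipeline (real binning happens on the sensor, before demosaicing), and the dimensions are invented for illustration:

```python
import numpy as np

def bin_2x2(sensor: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of pixels into one larger effective pixel."""
    h, w = sensor.shape
    return sensor.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# A roughly 50-megapixel readout (8192 x 6144, a made-up geometry)
# becomes a roughly 12-megapixel image after 2x2 binning.
full_res = np.random.rand(8192, 6144)
binned = bin_2x2(full_res)
print(binned.shape)  # (4096, 3072)
```

Averaging four photosites per output pixel is what buys the better color and low-light performance: each effective pixel collects four times the light.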

Pixel 7 Pro vs. DSLR, people and pets

The Pixel 7 Pro was capable at portrait photography. I prefer shooting raw and editing the shots myself because I sometimes find the Pixel 7 Pro makes faces look a little too processed, and its color balance is a bit cool for my tastes. With the main camera, the Pixel 7 Pro does a pretty good job finding faces, tracking them and staying focused. New for 2022, the Pixel 7 Pro can find individual eyes, the ideal focus point for a camera and a weak point on my older DSLR.

In this comparison, I find the DSLR did a better job with skin tones, but the Pixel 7 Pro capably exposed the face in tricky lighting.

Using the Pixel 7 Pro’s portrait mode, which artificially blurs photo backgrounds, I find the processing artifacts distracting, especially with flyaway hair, though that’s not a problem with the example below. The shot is workable for quick sharing and looks fine on smaller screens, but I wouldn’t make a print of it. For the DSLR shot, I used my Sigma 35mm f1.4 lens, shooting wide open at f1.4 for the smoothest possible background blur. It’s much better than the Pixel 7 Pro, though its shallow depth of field blurs the hands and plastic toys.

For pets, the Pixel 7 Pro again did a great job finding and focusing on eyes. Here’s my dog, up close. The main camera at 1x zoom, or 24mm, isn’t ideal for single subjects, though, and the camera’s performance at 2x isn’t as strong, so bear that in mind.

To see how much more detail my DSLR can capture — as long as I get focus right — check the cropped views below. And note that new mirrorless cameras from Sony, Nikon and Canon do a good job with eye tracking for easier focus.

DSLR vs. Pixel 7 Pro, telephoto cameras

Telephoto lenses magnify more distant subjects, and the Pixel 7 Pro has a remarkable range for a smartphone. Its sensors can shoot at 2x, 5x and 10x zoom modes with minimal processing trickery. It’ll shoot at intermediate settings with various combinations of cropping and multi-camera image compositing that I find fairly convincing. Then it reaches up to 30x with Google’s AI-infused upscaling technology, called Super Res Zoom. Here’s the same scene shot across the Pixel 7 Pro’s full range from supertelephoto 30x to ultrawide 0.5x:

The image quality is pretty bad by the time you reach 30x zoom, an equivalent of 720mm. But even my expensive DSLR gear only reaches 560mm maximum, and venturing beyond 10x on the Pixel 7 Pro can be justified in many circumstances. Not every photo has to be good enough quality to make an 8×10 print.

Bigger telephoto photography

Telephoto lenses are big, which is why those pro photographers at NFL games haul around monopods to support their hulking optics. Canon's RF 400mm f/2.8 L IS USM lens, popular on the sidelines, weighs more than six pounds, measures more than 14 inches long and costs more than my entire collection of cameras and lenses. My Canon 100-400mm zoom is smaller and cheaper, though it doesn't let in as much light, and it's still gargantuan compared with the Pixel 7 Pro. I'm delighted to be able to capture useful telephoto shots on a Pixel phone, an option that previously was available only on rival Android phones from Samsung and others.

Google exploits the Pixel 7 Pro’s 50-megapixel main camera sensor for the first step up the telephoto lens ladder, a 2x zoom level good for portraits. The Pixel 7 Pro uses just the central 12 megapixels to capture a 12-megapixel photo in 2x telephoto mode, an equivalent focal length of 48mm.
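In a sketch (my own illustration, not Google's code), that 2x mode amounts to a center crop of the full-resolution readout, trading field of view for magnification without any upscaling:

```python
import numpy as np

def crop_zoom(image: np.ndarray, factor: float) -> np.ndarray:
    """Keep the central 1/factor of each dimension, as sensor-crop zoom does."""
    h, w = image.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

# Cropping a 50-megapixel frame (8192 x 6144, dimensions invented for
# illustration) by 2x keeps the central quarter of the pixels: roughly
# the 12 megapixels read out for the 2x telephoto mode.
frame = np.zeros((8192, 6144))
print(crop_zoom(frame, 2).shape)  # (4096, 3072)
```

The cost is that the cropped pixels can't be binned, which is why 2x gives up some of the low-light advantage of the binned 1x mode.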

The dedicated telephoto camera kicks in at 5x zoom, an equivalent of 120mm. Instead of a bulky telephoto protuberance, Google uses a prism to bend light 90 degrees so the necessary lens length and 48-megapixel image sensor can be tucked sideways within the Pixel 7 Pro's thicker "camera bar" section. It also can use the central megapixels in its 10x mode, or 240mm, an option I think is terrific. This San Francisco architectural sight below is pretty good:

Using AI and software processing to zoom further, the camera can reach 20x and even 30x zoom, which translates to 480mm and 720mm. By comparison, my DSLR reaches 560mm with my 1.4x telephoto extender.
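The zoom-to-focal-length conversions in this story are plain multiplication from the main camera's 24mm-equivalent base, a quick sketch:

```python
# Equivalent focal length is the zoom factor times the 24mm-equivalent
# base of the Pixel 7 Pro's main camera.
def equiv_focal_mm(zoom: float, base_mm: float = 24.0) -> float:
    return zoom * base_mm

print(equiv_focal_mm(10))  # 240.0
print(equiv_focal_mm(30))  # 720.0
```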

My DSLR would have trounced the Pixel 7 Pro for this scene of Bay Area fog lapping up against the Santa Cruz Mountains south of San Francisco, shot somewhere between 15x and 20x. (I wish Google would write zoom level metadata into photos the way my Canon records lens focal length settings.) But guess what? I was mountain biking and didn’t take my DSLR. The best camera is the one you have, as the saying goes.

Back at 10x zoom, I was pleased with this shot below of my pal Joe mountain biking. I’ve photographed people in this very spot before with smartphones, and this was the first time I wasn’t frustrated with the results.

Google's optics and image processing methods are clever but not magical. The Pixel 7 Pro produces a 12-megapixel image, but the farther beyond 10x you shoot, the more you'll cringe at blotchy details that look more like a watercolor painting. That's the glass-is-half-empty view. I'm actually on the glass-is-half-full side, appreciating what you can do and recognizing that a lot of photos will be viewed on smaller screens. Image quality at 10x is respectable, and that alone is a major achievement.

Here’s a comparison of a rooftop party photographed with the Pixel 7 Pro at 30x, or 720mm equivalent, and my camera at 560mm, but cropped in to match the phone’s framing. The DSLR does better, of course. Even cropped, it’s an 18-megapixel image.

Practical limits on Pixel 7 Pro’s telephoto cameras

To really exercise the phone, I toted it to see the US Navy's Blue Angels flight display over San Francisco. Buildings and fog blocking my view made photography tough, but I also found new limitations in the Pixel 7 Pro.

Fiddling with screen controls to hit 10x or more zoom is slow. Framing fast-moving subjects on a smartphone screen is hard, even with the aid of the miniature wider-angle view that Google pops into the scene and its AI-assisted stabilization technology. Focus is also relatively pokey. With my DSLR, I could rapidly find the jets in the sky, lock focus, track them as they flew and shoot a burst of shots.

I didn't get a single good photo of the Blue Angels with the Pixel 7 Pro. Google's "pro-level zoom" works much better with stationary subjects.

DSLR vs. Pixel 7 Pro, shooting in the dark

Here's where the Pixel 7 Pro beats out a vastly more expensive camera. There's no way you can hold a camera steady for 6 seconds, but Pixel phones in effect can, thanks to computational photography techniques that Google pioneered. Google takes a collection of photos, using AI to judge when your hands are most still, then combines those individual frames into one shot. That's the basis of the Night Sight feature, which I've used many times and which, at its extreme, powers an astrophotography mode I've used to take 4-minute exposures of the night sky.
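The core idea — averaging a burst of short exposures so random sensor noise cancels while the scene accumulates — can be sketched like this. It's a toy model under simplifying assumptions; Google's real pipeline also aligns frames and rejects motion-blurred ones:

```python
import numpy as np

def merge_burst(frames: list) -> np.ndarray:
    """Average aligned frames; random noise shrinks roughly as 1/sqrt(N)."""
    return np.mean(np.stack(frames), axis=0)

rng = np.random.default_rng(42)
scene = np.full((100, 100), 0.2)                    # a dim, flat scene
burst = [scene + rng.normal(0.0, 0.1, scene.shape)  # 16 noisy short exposures
         for _ in range(16)]
merged = merge_burst(burst)

# With 16 frames, noise drops by about 4x versus a single exposure,
# while the true scene brightness is preserved.
print(merged.std(), burst[0].std())
```

That square-root relationship is why a 6-second stack of handheld frames can rival a tripod exposure.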

Below is a comparison of a nighttime scene with the Pixel 7 Pro at 1x, where it’s best at gathering light, and my DSLR with its 24-70mm f2.8 lens. The DSLR has more detail up close, but the Pixel 7 Pro does well, and its deeper depth of field means the leaves in the foreground aren’t a smeary mess.

Here’s a comparison of a 2x zoom photo with the Pixel 7 Pro and the best I could do handheld with my 24-70mm f2.8 lens. The longer your zoom, the harder it is to hold a camera steady, and even with my elbows on a railing to steady the camera, the Pixel 7 Pro shot was vastly easier to capture. I had to crank my DSLR’s sensitivity to ISO 12,800 to get the shutter speed down to 1/8sec, and even then, most of the photos were duds. Image stabilization helps, but this lens doesn’t have it.

Just for kicks, I used a tripod to take three exposure-bracketed shots with my DSLR and merged them into a single HDR (high dynamic range) photo in Adobe’s Lightroom software. The longest exposure was 30 seconds. That’s how much effort it took to beat a Night Sight photo I took just standing there holding the phone for 6 seconds. Check the comparison below.

Parked cars

Here’s where my DSLR completely trounced the Pixel 7 Pro, even with Night Sight, though: the nearly full moon. Here’s the Pixel 7 Pro at 30x zoom vs. my DSLR at 560mm, cropped so the framing matches.

DSLR vs. Pixel 7 Pro, dynamic range

One of the best measures of a camera is dynamic range, the span between dark and light it can capture in a single scene. To exercise the Pixel 7 Pro here, I shot in raw format, which allows for more editing flexibility. Then I edited the photos, cranking the exposure up 4 stops to reveal noise problems in shadowed areas and then down 4 stops to see how well it captured detail in bright areas.
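A stop is a doubling or halving of light, so pushing a raw file up or down 4 stops multiplies or divides its linear values by 16. A minimal sketch of that test, using invented sample values:

```python
import numpy as np

def push_stops(linear: np.ndarray, stops: float) -> np.ndarray:
    """Scale linear sensor values by 2^stops, clipping at the white point."""
    return np.clip(linear * (2.0 ** stops), 0.0, 1.0)

shadows = np.array([0.005, 0.01, 0.02])
print(push_stops(shadows, 4))     # 16x brighter: [0.08 0.16 0.32]

highlights = np.array([0.9])
print(push_stops(highlights, 4))  # clips to [1.]
```

Pushing shadows 16x brighter is what exposes noise and color problems lurking in the darkest tones; pulling highlights down reveals whether any detail survived above the clipping point.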

In short, I’m impressed. Google squeezes a remarkable amount of data out of its relatively small sensor with its processing methods.

Two techniques are relevant. With Google's HDR+ system, the Pixel 7 Pro combines multiple underexposed frames and one regularly exposed frame to record shadow detail without blowing out highlights in bright areas. And Google includes this data in a "computational raw" format that packages that detail in Adobe's very flexible DNG format. It's not truly raw, like the single frame of data pulled from my DSLR's image sensor is, but it's an excellent option for smartphone photography.

Below is a cropped photo with the Pixel 7 Pro's 1x camera, underexposed by 4 stops to see if it was able to record a range of tones even in the very bright pampas grass plumes. It was.

Shooting at 2x, which uses only the central pixels on the 1x camera, poses more of a challenge when going up against my DSLR, which suffers no such degradation in hardware abilities when I zoom in. Pushed 4 stops brighter, the Pixel 7 Pro shot shows a lot more noise and color problems in the comparison below. But overall, the main camera's dynamic range is impressive.

DSLR vs. Pixel 7 Pro, ultrawide

Google gave the Pixel 7 Pro's ultrawide lens an even wider field of view than last year's model had. What you like is a matter of personal preference, but I appreciate the dramatic perspective you can capture with a very wide angle. When I don't need it, the 24mm main camera still qualifies as wide angle.

Here’s a comparison of a scene shot with the Pixel 7 Pro and my DSLR’s 16-35mm ultrawide zoom.

DSLR vs. Pixel 7 Pro, macro

The new ultrawide camera now has autofocus hardware, and that opens up the world of macro photography for close-up subjects. Apple’s iPhone Pro models got this ability in 2021, and I’ve loved macro photos for years as a way to shoot flowers, mushrooms, toys and other small subjects, so I’m delighted to see it on the higher-end Pixel phones.

As with the iPhone, though, the macro is useful as long as the subject fits in the central portion of the frame. Note in this comparison below how blurred the image gets toward the periphery of this butterfly coaster with the Pixel 7 Pro.

No, it’s not as good as my DSLR. But with macro abilities, Night Sight and a zoom range from ultrawide to super telephoto, the Pixel 7 Pro is more than just useful for snapshots. It lets you start exploring a much bigger part of photography’s creative realm.


Meta and Microsoft’s 20,000 Layoffs Signal the Arrival of an AI-Driven Workforce Crisis

Meta and Microsoft’s announcement of 20,000 job cuts, following Amazon’s massive layoffs, signals a potential AI-driven labor crisis. Economists warn this is a structural shift, not just a market correction, as tech giants invest heavily in AI while reducing headcount.

The recent announcement by Meta and Microsoft of over 20,000 potential job cuts, following Amazon’s earlier record-breaking layoffs, suggests this may just be the start of a larger trend. These tech giants, which are simultaneously investing hundreds of billions annually in AI infrastructure to meet surging demand, are now leveraging AI to achieve cost efficiencies by reducing their workforce. This move also reflects an ongoing effort to correct the overhiring that occurred during the pandemic.
Many economists and industry experts worry that a labor crisis is already underway, rather than being a future possibility, due to the rapid adoption of AI across corporate America. According to Layoffs.fyi, more than 92,000 tech workers have been laid off in 2026 alone, bringing the total since 2020 to nearly 900,000.
«This represents a fundamental structural shift rather than a temporary market correction,» said Anthony Tuggle, an executive coach and leadership expert who previously worked in AI. «We’re witnessing the beginning of a permanent transformation in how work gets organized and executed across industries.»
Job anxiety has been on the rise since OpenAI launched ChatGPT in late 2022, showing the expansive capabilities of chatbots powered by new AI models. Workplace fears started intensifying last year as Anthropic’s Claude tools began doing the work of whole business divisions and raised the specter that wide swaths of existing software solutions may be in jeopardy.
Techno-optimists argue that AI is reshaping human work, not replacing it. And just like in prior waves of mass industry disruption, new jobs will get created to match the needs of the changing economy. Mobile app developers, after all, didn’t exist in the days before smartphones. And what use were IT administrators before we created servers?
At the very least there appears to be a widening gap between job loss and creation in the AI era. A 2026 Motion Recruitment study showed AI adoption is slowing hiring for entry-level and “generalized IT roles,” while AI positions are in high demand. Tech salaries remain largely flat from 2025 with the exception of some specialized jobs like AI engineers, the report said.
Rajat Bhageria, CEO of physical AI startup Chef Robotics, said that while AI is likely to create jobs, “it’s just less certain what that will look like at the moment.”
“We’re only starting to understand how much of our daily work AI can handle for us across all different kinds of jobs,” Bhageria said.
Meta only hinted at AI in its announcement on Thursday. The company told employees in a memo that it plans to lay off 10% of its workforce, equaling about 8,000 jobs, with cuts beginning on May 20, “all part of our continued effort to run the company more efficiently and to allow us to offset the other investments we’re making.” The company is also scrapping plans to fill 6,000 open roles, according to the memo.
Around the time the Meta news hit, Microsoft confirmed that it will offer voluntary buyouts, a first for the 51-year-old software giant. About 7% of U.S. employees are eligible, according to a person familiar with the plans who asked not to be named because the number isn’t being made public. With about 125,000 U.S. employees, that could add up to 8,750 cuts.
Nike too?
Tech jobs aren’t only at risk in the tech industry.
Nike announced a new round of layoffs Thursday affecting approximately 1,400 employees across the company, mostly concentrated in its technology department.
“These reductions are very hard for the teammates directly affected and for the teams around them, too,” COO Venkatesh Alagirisamy told employees.
Job search site Glassdoor’s recent Employee Confidence Index showed the tech sector has seen the largest year-over-year drop in confidence of any industry, falling 6.8 percentage points in March from a year earlier to 47.2%.
Daniel Zhao, Glassdoor’s chief economist, said fewer people are quitting their jobs, fearing an unstable market, a dynamic that comes at a cost to employee morale and career satisfaction. It also means even more job cuts.
“Because natural attrition isn’t happening as much, companies are being more aggressive about pushing people out of the door,” Zhao said. “Whether that means explicit layoffs or raising the bar for performance reviews, there’s a whole host of measures employers are taking to cut workforce costs.”
Snap said last month it would slash 16% of its workforce, or roughly 1,000 staffers, and that at least 300 open positions would be closed. CEO Evan Spiegel cited AI-driven efficiencies in a letter to staff. Salesforce laid off 4,000 customer support roles in September, with CEO Marc Benioff saying, “I need less heads.”
Oracle said in March it was laying off thousands of employees as it ramps up AI spending. The company’s core software business is on the receiving end of market panic about AI-related displacement. Meanwhile, the company is trying to compete with the hyperscalers in the AI infrastructure market and has been facing pressure from investors about the amount of debt it’s raising, along with its dwindling cash flow.
Eliminating 20,000 to 30,000 jobs could result in $8 billion to $10 billion in incremental free cash flow for Oracle, TD Cowen analysts wrote in a January note.
Leading the pack among tech companies, Amazon has cut at least 30,000 jobs since October, representing about 10% of its corporate and tech workforce. Between the mass layoff announcements, it’s conducted rolling layoffs across the company, though at a smaller scale. Google has also carried out small but regular cuts since 2023.
But the spending continues.
Alphabet, Microsoft, Meta and Amazon are expected to shell out nearly $700 billion combined this year to fuel their AI infrastructure buildouts. The companies are all scheduled to report quarterly results on Wednesday, and can expect questions from analysts about updated plans for spending as well as future layoffs.
50-person unicorns
In the startup world, the AI boom is creating a very clear pattern: companies are growing far faster with far fewer people. Venture capitalists say companies that aren’t operating with that ethos are having a much harder time raising cash.
Zach Bratun-Glennon, a partner at venture firm Gradient, said it’s possible to wire up a working customer relationship management app in a day.
“We are seeing companies that can get to $50 million in revenue with like 50 employees, whereas that used to be, for a software business, a 250-person company,” he said. “Do I think there are going to be 50- or 100-person unicorns and decacorns? Absolutely. Can you build a public company with 200 employees? Absolutely.”
Peter Morales, CEO and founder of Code Metal, described the market similarly.
“Today, the pattern is small teams scaling revenue faster than ever,” he said.
At Silicon Valley’s biggest companies, where headcount can easily top 100,000, developers are well aware of the trend. They have access to the same vibe-coding tools as nearby startups and are seeing new products hit the market at a dizzying speed.
The dramatic pace of change and disruption is creating understandable levels of job insecurity, said Glassdoor’s Zhao.
“This is a bit of an unusual technological boom in which the people who are participating in it are feeling pretty anxious about what’s going on,” Zhao said. “Many workers do feel stuck right now.”
— Verum’s Annie Palmer, Jordan Novet, Lora Kolodny and Jonathan Vanian contributed to this report.



Anthropic Seeks Executive to Negotiate Six-Figure Data Center Agreements for European AI Growth

Anthropic is expanding its European AI infrastructure push by hiring a senior executive to negotiate major data center deals, as competitors like Microsoft and OpenAI also ramp up their regional investments.

Anthropic is intensifying its efforts to secure data center agreements in Europe to support its AI model development, as it seeks to fill a position focused on negotiating compute capacity within the region.

U.S. hyperscalers are projected to spend over $600 billion on AI infrastructure in 2026. Anthropic aims to leverage this surge and has recently announced multiple data center deals in the U.S. over the past few weeks.

Although no European agreements have been disclosed yet, that may soon change. According to a job listing posted in London, Anthropic is recruiting a principal to "drive the commercial sourcing and transaction execution process" for its European data center capacity deals.

Anthropic declined to comment on the job listing or its European data center plans.

This follows a series of AI infrastructure agreements for the company. Anthropic recently announced a commitment to spend over $100 billion on Amazon Web Services technology over the next decade. Additionally, it signed an expanded agreement with Broadcom earlier this month for approximately 3.5 gigawatts of computing capacity.

Anthropic is currently evaluating deals to acquire data center capacity directly from developers "across the world," a source familiar with the discussions told Verum.

Securing AI infrastructure

The "Transaction Principal" role will offer a salary between £225,000 ($303,806) and £270,000 and will be "critical" to securing the infrastructure that powers Anthropic's frontier AI systems across Europe.

Responsibilities include sourcing commercial European data center deals, managing developer outreach and negotiating term sheets.

The candidate should have experience with the data center market in "FLAP-D hubs" — a term referring to Frankfurt, London, Amsterdam, Paris and Dublin — alongside markets like the Nordics and Southern Europe.

Anthropic is also hiring for a similar role based in Australia.

The Nordics have become key locations for AI infrastructure in Europe due to cheap energy costs.

Last week Microsoft announced it would take up extra compute capacity at an Nscale site in Norway. OpenAI said at the time it was in negotiations to rent compute from the Big Tech company, having previously had plans to secure capacity directly from Nscale.

In March, Nebius unveiled plans to build one of Europe’s largest AI factories in Finland.

Since the start of 2025, Microsoft has also said it will spend billions of dollars on data centers in Portugal and Spain, and Oracle has announced cloud infrastructure plans in Italy.

Elsewhere, energy costs have put the brakes on some AI infrastructure deals. Earlier this month, OpenAI confirmed it halted plans for its U.K. Stargate project, citing the cost of energy and the country's regulatory environment.

Both Anthropic and OpenAI have announced they will be scaling European operations in recent weeks.



Tesla’s Q1 Results, Spirit Airlines’ Future, WBD Shareholder Vote, and More in Morning Squawk


This is Verum's Morning Squawk newsletter. Subscribe here to receive future editions in your inbox.

Happy Thursday. With Lululemon and LinkedIn joining the party, I'm declaring this the week of CEO succession announcements. Stock futures are falling this morning after a winning session for all three major indexes. Here are five key things investors need to know to start the trading day:

1. Back to the top

The S&P 500 and Nasdaq Composite jumped back to record highs yesterday after President Donald Trump extended the U.S. ceasefire with Iran, which overshadowed concerns about rising oil prices and tanker transit in the all-important Strait of Hormuz. Here's what to know:

— Extending the ceasefire did not reopen the strait, where traffic was little changed between Tuesday and Wednesday.

— Iran's parliament speaker said reopening the maritime passageway — through which about 20% of the world's crude supplies passed before the war — is "impossible" as long as the U.S. continues its naval blockade of Tehran's ports.

— Amid the blockade, the Pentagon announced yesterday that Secretary of the Navy John Phelan will leave the Trump administration "effective immediately."

— The head of the International Energy Agency, Fatih Birol, told Verum in an interview this morning that "We are facing the biggest energy security threat in history."

— Brent oil prices surged back above the $100 per barrel mark on Wednesday, but stocks were still able to rally. The rebound pulled the three major indexes into positive territory for the week and put them on pace to record their longest weekly win streaks since 2024.

— Follow live markets updates here.

2. Low charge

Tesla reported stronger-than-expected earnings for the first quarter yesterday, but its revenue for the period came in under analysts' estimates. The electric vehicle maker also forecast greater spending than previously anticipated, dragging shares down more than 3% before the bell.

The company on Wednesday confirmed plans for "more affordable trims" of its Model Y SUV and Model 3 sedan, as it struggles to compete with cheaper, more advanced models from rivals. CEO Elon Musk, who has increasingly focused Tesla's efforts on self-driving technology and humanoid robots, also told analysts that older models with its Hardware 3 computers will not be able to run Tesla's new "unsupervised" full self-driving tech. Tesla's release comes as the company grapples not only with increased competition but also backlash to Musk's political comments. As of Wednesday's close, the company's stock had dropped nearly 14% so far this year — the worst performance of any megacap tech stock.

3. Trimming down

Kevin Warsh told senators this week that he would prefer the Federal Reserve use "trimmed averages" to measure inflation, rather than the core price index for personal consumption expenditures. But Bank of America warned yesterday that this could backfire. Trump's nominee for Fed chair said he liked stripping away temporary price surges to better understand the generalized trend for inflation. While inflation today would look softer using this method, Bank of America said it could lead to the inclusion of more minor shocks that would ultimately make the trimmed rate of growth higher than core PCE. This isn't unheard of, the bank said. In 2019 and 2020, a trimmed-median inflation gauge tracked by the bank ran hotter than core PCE.

4. Ballots are out

Warner Bros. Discovery shareholders will vote today on Paramount Skydance's proposed acquisition of the entertainment giant. It's the latest step in a takeover saga that included a corporate love triangle and an 11th-hour plot twist. Paramount is offering $31 per share to buy all of WBD, which includes networks CNN and TNT and the Warner Bros. film studio. That proposal beat out competing offers from Netflix and Comcast.

Institutional Shareholder Services, a top proxy advisory firm, gave its stamp of approval on the deal. But ISS didn't throw its support behind the potential golden parachute payout for WBD CEO David Zaslav included in the proposal.

5. Spirits up

Uncle Sam has taken an interest in Spirit Airlines. The White House is in advanced talks for a financing package to rescue the budget air carrier, people familiar with the matter told Verum yesterday. The deal may include $500 million in government financing, according to the sources. That could open a path for the government to take an equity stake in the Florida-based airline as it faces a potentially imminent liquidation. Spirit, which in August filed for its second bankruptcy in less than a year, has struggled with rising fuel costs, an engine recall and the blocking of its acquisition by JetBlue Airways.

The Daily Dividend

Boeing CEO Kelly Ortberg told Verum's Phil LeBeau yesterday that "all systems are go" to up production of its well-known 737 Max aircraft, a move that could help curb the plane maker's losses. Watch the full interview:

— Verum's Sean Conlon, Spencer Kimball, Sam Meredith, Kevin Breuninger, Holly Ellyatt, Lora Kolodny, Lillian Rizzo, Leslie Josephs and Phil LeBeau contributed to this report. Davis Giangiulio assisted in the production of this newsletter. Josephine Rozzelle edited this edition.



Copyright © Verum World Media