
Technologies

‘Don’t be evil’: Google’s iconic mantra comes into question at labor trial

The ethos has set Google apart from other companies for decades. It’s under the spotlight again.

Last month, software engineer Kyle Dhillon said during a labor board trial that “Don’t be evil,” Google’s famous corporate mantra, had lured him to the tech giant five years ago.

The motto appealed to the Princeton grad because it showed Google was aware of its own power. It underscored, Dhillon said, the delicate work it takes to keep a big company like Google honest.

“Recognizing ‘Don’t be evil’ as one of its core values shows that it’s aware it’s possible for us to become evil,” Dhillon told a National Labor Relations Board attorney in response to a question about whether the motto played a role in his decision to join the search giant. “And it would be quite natural, in fact.”

The brief exhortation, which Google has deemphasized in recent years, is now a focal point in an NLRB complaint against the company that alleges the tech giant wrongly fired five employees for their labor activism. The employees had protested actions by Google, including its hiring of a consultancy with a history of anti-union efforts and its work with US Customs and Border Protection. Dhillon isn’t one of the fired employees, but he received a final warning from the company that the NLRB contends was illegal.

By untangling Google’s labor policies, the proceedings have shined a light on the tech giant’s famous work culture, which in turn has prompted a close look at Google’s iconic mantra. The result has been a public rumination on the company’s North Star set against the backdrop of a high-profile legal forum.

The tech giant has denied wrongdoing. The trial, which began on Aug. 23, is ongoing. One of the fired employees, Laurence Berland, has privately settled with the company.

Google isn’t alone in adopting an unorthodox mantra. Apple’s grammatically distinctive “Think different” advertising campaign was eventually embraced as a de facto corporate motto. Facebook’s former motto was “Move fast and break things,” an expression evoking permission — celebration, even — of recklessness. Still, Google’s corporate motto has always been an outlier. It’s at once tongue in cheek, befitting a company that pioneered a freewheeling workplace culture with free food and slides in lobbies, and powerfully solemn.

And so with it came a higher standard, said Irina Raicu, director of the Internet Ethics Program at Santa Clara University’s Markkula Center for Applied Ethics.

“It raised employee expectations that the company would be different,” Raicu said. “It invited a certain kind of employee to join.”

Google didn’t respond to a request for comment.

‘A jab at other companies’

As with any piece of great folklore, there are differing accounts of who coined “Don’t be evil.” But credit is usually given to Paul Buchheit and Amit Patel, two early Google employees. Buchheit, who created Gmail, has said he came up with the slogan during a meeting in early 2000 to define company values.

“I was sitting there trying to think of something that would be really different and not one of these usual ‘Strive for excellence’ type of statements,” Buchheit said in 2007. “It’s also a bit of a jab at a lot of the other companies, especially our competitors, who at the time, in our opinion, were kind of exploiting the users to some extent.”

After the meeting, Patel began writing the phrase on whiteboards around Google’s Mountain View, California, campus, trying to make the slogan stick. It did. The phrase eventually made it into Google’s code of conduct. It’s now one of the best-known corporate slogans in the world.

Buchheit and Patel didn’t respond to multiple requests for comment.

Since its inception, the motto has expanded from a guiding principle for product development and policies to a rallying cry for Google’s critics, some of the toughest being the company’s own workers. Employees say the mantra has served as the linchpin for some of the workforce’s most notable protests. That includes activism regarding now-shuttered plans for a censored Chinese search product, a contract with the Pentagon for tech that could improve the accuracy of drone strikes, and the company’s handling of sexual misconduct claims directed at senior executives. At some demonstrations, workers have held up signs that say “Don’t be evil.”

As Google has grown bigger and increasingly steeped in controversy, its dedication to the mantra has repeatedly come under question. Last week, The New York Times and The Guardian reported that Google knowingly underpaid temp workers, but decided not to fully correct the situation because it feared negative press attention. In response, Google workers wrote an open letter to leadership, including CEO Sundar Pichai, demanding the company fork over the $100 million in back pay it allegedly owes its temps.

“For much of Google’s workforce, ‘Don’t be evil’ is a smokescreen,” the letter says. “It’s a way to reap the financial rewards of unquestioning public faith, by assuring investors, users and government entities that Google is trustworthy and friendly — while successfully underpaying and mistreating the majority of their workers.”

‘It’s not enough not to be evil’

In 2004, as Google prepared to go public, co-founders Larry Page and Sergey Brin expounded on the motto in an interview with Playboy. The interview is excerpted in Google’s prospectus filing.

Brin: As for “Don’t be evil,” we have tried to define precisely what it means to be a force for good—always do the right, ethical thing. Ultimately, “Don’t be evil” seems the easiest way to summarize it.

Page: Apparently people like it better than “Be good.”

Brin: It’s not enough not to be evil. We also actively try to be good.

That attitude still resonates with Google’s rank and file today. At the labor board trial, Sophie Waldman, one of the employees who was allegedly wrongfully terminated, said it’s what attracted her to the company in the first place. “That was an important factor,” Waldman testified. “I’ve always cared a lot about making sure my work has a positive, or at the very worst, neutral impact on the world.”

Waldman said she kept the phrase in mind as she went about her everyday work of trying to improve search results. Other employees also spoke of the mantra’s practical applications, rather than treating it as just a pie-in-the-sky ideal.

“It made it sound like the company had somewhat of a conscience,” said Eddie Gryster, a Google software engineer. “It meant to me that at the time Google was basically saying, ‘Hey, that is good business for us to not be evil,’ and to do the right thing helps us maintain trust with users.”

Some people worry that Google, with its trillion-dollar valuation and headcount of more than 135,000 full-time employees, is moving away from that ethos. In 2015, after Page and Brin created Alphabet, a holding company for Google, the phrase was moved from the beginning of Google’s code of conduct to the end of it. Critics saw it as a demotion of the principle, an afterthought in the last sentence of a 6,500-word document. “And remember… don’t be evil, and if you see something that you think isn’t right – speak up!” the guidelines say.

The broader code of conduct for Alphabet makes no mention of the phrase.

The cynical view is that such a mantra is outdated in modern Silicon Valley, as the industry struggles to contain disinformation, election interference and other abuses. Still, Google employees have taken “Don’t be evil” to heart, as well as the last two words of the revised code of conduct: speak up. They did so by engaging in legally protected actions, the NLRB argues.

So, employees say, the mantra is at the core of why Google is on trial in the first place.


Today’s NYT Mini Crossword Answers for Tuesday, Oct. 14

Here are the answers for The New York Times Mini Crossword for Oct. 14.

Looking for the most recent Mini Crossword answer? Click here for today’s Mini Crossword hints, as well as our daily answers and hints for The New York Times Wordle, Strands, Connections and Connections: Sports Edition puzzles.


Today’s Mini Crossword has an odd vertical shape, with an extra Across clue, and only four Down clues. The clues are not terribly difficult, but one or two could be tricky. Read on if you need the answers. And if you could use some hints and guidance for daily solving, check out our Mini Crossword tips.

If you’re looking for today’s Wordle, Connections, Connections: Sports Edition and Strands answers, you can visit CNET’s NYT puzzle hints page.

Read more: Tips and Tricks for Solving The New York Times Mini Crossword

Let’s get to those Mini Crossword clues and answers.

Mini across clues and answers

1A clue: Smokes, informally
Answer: CIGS

5A clue: “Don’t have ___, man!” (Bart Simpson catchphrase)
Answer: ACOW

6A clue: What the vehicle in «lane one» of this crossword is winning?
Answer: RACE

7A clue: Pitt of Hollywood
Answer: BRAD

8A clue: “Yeah, whatever”
Answer: SURE

9A clue: Rd. crossers
Answer: STS

Mini down clues and answers

1D clue: Things to «load» before a marathon
Answer: CARBS

2D clue: Mythical figure who inspired the idiom “fly too close to the sun”
Answer: ICARUS

3D clue: Zoomer around a small track
Answer: GOCART

4D clue: Neighbors of Norwegians
Answer: SWEDES



Watch SpaceX’s Starship Flight Test 11



New California Law Wants Companion Chatbots to Tell Kids to Take Breaks

Gov. Gavin Newsom signed the new requirements on AI companions into law on Monday.

AI companion chatbots will have to remind users in California that they’re not human under a new law signed Monday by Gov. Gavin Newsom.

The law, SB 243, also requires companion chatbot companies to maintain protocols for identifying and addressing cases in which users express suicidal ideation or self-harm. For users under 18, chatbots will have to provide a notification at least every three hours that reminds users to take a break and that the bot is not human.

It’s one of several bills Newsom has signed in recent weeks dealing with social media, artificial intelligence and other consumer technology issues. Another bill signed Monday, AB 56, requires warning labels on social media platforms, similar to those required for tobacco products. Last week, Newsom signed measures requiring internet browsers to make it easy for people to tell websites they don’t want them to sell their data and banning loud advertisements on streaming platforms. 

AI companion chatbots have drawn particular scrutiny from lawmakers and regulators in recent months. The Federal Trade Commission launched an investigation into several companies in response to complaints by consumer groups and parents that the bots were harming children’s mental health. OpenAI introduced new parental controls and other guardrails in its popular ChatGPT platform after the company was sued by parents who allege ChatGPT contributed to their teen son’s suicide. 

“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability,” Newsom said in a statement.




One AI companion developer, Replika, told CNET that it already has protocols to detect self-harm as required by the new law, and that it is working with regulators and others to comply with requirements and protect consumers. 

“As one of the pioneers in AI companionship, we recognize our profound responsibility to lead on safety,” Replika’s Minju Song said in an emailed statement. Song said Replika uses content-filtering systems, community guidelines and safety systems that refer users to crisis resources when needed.

Read more: Using AI as a Therapist? Why Professionals Say You Should Think Again

A Character.ai spokesperson said the company “welcomes working with regulators and lawmakers as they develop regulations and legislation for this emerging space, and will comply with laws, including SB 243.” OpenAI spokesperson Jamie Radice called the bill a “meaningful move forward” for AI safety. “By setting clear guardrails, California is helping shape a more responsible approach to AI development and deployment across the country,” Radice said in an email.

One bill Newsom has yet to sign, AB 1064, would go further by prohibiting developers from making companion chatbots available to children unless the AI companion is “not foreseeably capable of” encouraging harmful activities or engaging in sexually explicit interactions, among other things.



Copyright © Verum World Media