(0:00) Maybe you’ve heard of something called NVIDIA. (0:02) It sounds like a prescription drug or maybe an African country, but it’s actually a company (0:06) based in California that’s now worth more than all of China’s stock market. (0:11) It’s the size of Canada’s entire economy.
(0:14) Now, in a different era, achieving this kind of growth meant making a massively popular (0:18) and instantly recognizable consumer-facing product like Windows 95 or Amazon.com or the (0:24) iPhone. (0:25) But NVIDIA’s growth didn’t come from making a computer or a popular website or anything (0:29) like that. (0:29) Instead, NVIDIA’s growth came from making artificial intelligence chips that power the (0:34) brains of computers and many popular websites.
(0:37) That’s why NVIDIA had a very good day on Wall Street on Wednesday. (0:41) Their business, artificial intelligence, is one of the fastest growing industries in the (0:45) history of humanity. (0:46) Every major corporation is rushing to implement AI in all of their products as quickly as possible.
(0:52) And so this week, it was Google’s turn. (0:54) And the results were so disastrous and so fraught with consequences for the future of this country (0:59) that no reasonable person can ignore them. (1:03) Gemini is Google’s name for an AI that you can download on your phone right now.
(1:08) It’s also integrated into all of Google’s web products, including Gmail and Google Search,(1:12) which are used by hundreds of millions of people and businesses every day. (1:16) And in this respect, Gemini is very different from existing AI products like ChatGPT or Bing’s (1:22) Image Creator. (1:23) Pretty much everybody uses a Google product in one way or another.
(1:27) If you have the Internet and you use the Internet, you use a Google product. (1:31) Either you’re using Google Search or Gmail or you have an Android phone or something (1:36) along those lines. (1:37) And that means two things.
(1:38) One, Google has access to a lot more information than those other AI platforms. (1:43) That’s a built-in advantage. (1:44) And two, whatever Google is doing with AI has significant implications for everybody (1:49) on the planet.
(1:51) This is not a one-off experiment in some tech mogul’s basement. (1:54) This is an established company making established products that it’s now implementing in its (2:00) own AI at scale. (2:03) Google has been hyping Gemini for months.
(2:06) They have a bunch of promotional videos about how they’re going to revolutionize artificial (2:09) intelligence. (2:10) The Wall Street Journal has done multiple interviews with Google executives in which these executives (2:14) insist that everybody at the company, including Google’s co-founder, is deeply invested in (2:18) making this product as good as it could possibly be. (2:22) Then a couple of days ago, Gemini launched.
(2:24) And very quickly it became clear that, among some other issues, Gemini essentially does (2:29) not recognize the existence of white people, which is kind of concerning for what is destined (2:35) to be, and probably already is, the most powerful AI on the planet. (2:40) Now even in historical context, it is practically impossible to get this product to serve up (2:45) an image of somebody with white skin. (2:48) And that’s not an exaggeration.
(2:50) So here, for example, is how Gemini responded the other day when Frank Fleming, who’s a (2:54) writer for the Bentkey children’s shows, asked Gemini to create an image of a pope. (3:00) Now you would think that, you know, that would generate maybe an image of a white guy or (3:05) two, if you have even a passing knowledge of what popes have looked like over the years, (3:08) over the centuries, over the millennia. (3:10) And just spoiler on that, they have all been white.
(3:14) But that’s not what Google’s AI product apparently thinks. (3:17) This is the image that it produced. (3:19) And you can see it there.
(3:20) It looks like, you know, they’ve got two popes and one of them is M. Night Shyamalan and the (3:24) other one is Forest Whitaker. (3:26) So it’s almost as if the AI has some sort of code saying, whatever you do, don’t display (3:33) a white person, considering there has never been a pope that has looked anything like (3:38) either of those two ever in 2000 years. (3:42) So is that what they’ve built into this code? (3:45) Have they built into this very powerful AI that it has to ignore the fact that white (3:51) people exist? (3:52) Well, it’s really the only way to explain what we’re seeing here.
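To make that concrete: image-generation systems like this typically rewrite the user’s prompt before it ever reaches the model. Below is a minimal, purely hypothetical sketch of what such a prompt-rewriting layer could look like. None of this is Google’s actual code; the function name and the modifier list are invented for illustration only.

```python
# Purely hypothetical illustration -- not Google's actual code.
# Shows how a pre-processing layer could silently rewrite a user's
# image prompt before handing it to the image-generation model.

# Invented modifier list, for the sake of the example.
INJECTED_MODIFIERS = [
    "depict the subjects as racially and ethnically diverse",
    "include a range of genders",
]

def rewrite_prompt(user_prompt: str) -> str:
    """Return the prompt the model would actually see: the user's
    request with the injected modifiers appended to the end."""
    return user_prompt + ". " + "; ".join(INJECTED_MODIFIERS) + "."

if __name__ == "__main__":
    # The user asks for something historically specific...
    original = "an image of a pope"
    # ...but the model only ever receives the rewritten version.
    print(rewrite_prompt(original))
```

The point is just that one short layer like that, sitting between the user and the model, would be enough to produce results like the ones above.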
(3:56) And Frank, who previously worked as a software engineer, seemed to key in on this. (3:59) So the whole situation quickly became something of a game for him as he tried his hardest to (4:03) get Gemini to produce any image of a white guy. (4:08) I mean, even just like one image.
(4:10) Can you give us a white guy? (4:11) So, for example, he asked Gemini to produce an image of a Viking. (4:15) OK, now this is a group of people who historically were not necessarily known for their commitment (4:21) to diversity, equity and inclusion. (4:23) But here’s what Gemini produced and you can see it here.
(4:26) We’ve got a black Viking, a black female Viking. (4:30) We’ve got it looks like an Asian, an Asian Viking. (4:34) And then I don’t know, maybe that’s the rock down there.
(4:38) That’s the character from Moana, I think. (4:43) Again, literally, a Viking has never looked like any of that. (4:46) That’s not what any Viking ever looked like, ever, in history.
(4:49) But that’s what they produced. (4:51) This went on for a while. (4:52) And Frank and other Gemini users took turns trying their hardest to get Gemini to produce (4:56) an image of a white guy.
(4:58) Peachy Keenan, for example, tried to get Gemini to generate an image of the founders of Fairchild (5:03) Semiconductor. (5:03) The A.I. flatly refused that request, saying that it violated policy restrictions, presumably (5:09) because white guys founded Fairchild Semiconductor. (5:12) And for other prompts, like requests to draw the Founding Fathers or a bunch of British (5:16) men, Gemini simply generated images of black people. It even made sure that its images of (5:22) Nazis contained a diverse, non-white group of people.
(5:27) Now, after thousands of images like this began circulating, a guy working on the Gemini team (5:33) at Google put out a meaningless statement. (5:35) He said, in essence, that they’re aware of issues with Gemini misrepresenting historical (5:41) figures. (5:42) But then he doubled down on the need for D.E.I. in artificial intelligence so that everybody (5:47) feels seen or valued or whatever.
(5:50) And of course, the way to make everyone feel seen is to pretend that an entire race of people (5:55) don’t exist. (5:56) To make sure that they are not seen at all is how you make everybody feel seen. (6:00) At no point did any Google representative explain why their A.I. does not recognize the existence (6:07) of white people or why it goes to extreme lengths to exclude white people from history.
(6:12) There was no accounting for this, even though there has to be an explanation. (6:15) And it’s probably a pretty simple explanation; this doesn’t happen by accident. (6:18) You obviously put a line of code into this thing to come up with this result.
(6:23) And so why did you do that? (6:25) They wouldn’t explain it. (6:26) So I went looking for an explanation. (6:27) I came across a woman named Jen Gennai, who bills herself on her LinkedIn as the founder (6:34) of Google’s global responsible A.I. operations and governance team.
(6:38) In that capacity, Gennai says that she ensured Google met its A.I. principles, our company’s (6:43) ethical charter for the development and deployment of fair, inclusive and ethical advanced technologies. (6:48) She says that she took a, quote, principled, risk-based, inclusive approach when conducting (6:52) ethical algorithmic impact assessments of products prior to launch to ensure that they (6:58) didn’t cause unintended or harmful consequences to the billions of Google’s users. (7:03) And apparently, you know, a harmful consequence would be showing an image of a white Viking.
(7:09) That might be very harmful to somebody. (7:11) And so we got to make sure that we don’t let that happen. (7:13) Now currently, Gennai says that she’s an A.I. ethics and compliance advisor at Google.
(7:19) Now, what Gennai doesn’t mention on her LinkedIn is that her goal for a long time has been (7:23) to treat white people differently based on their skin color. (7:27) That’s what she wants her A.I. to do. (7:28) It’s what she does also.
(7:31) Three years ago, Gennai delivered a keynote address at an A.I. (7:34) conference in which she admitted all of this. (7:36) After introducing herself with her pronouns, which, by the way, are she/her, in case (7:40) you’re wondering, Gennai explains what her philosophy on A.I. (7:44) is. And here’s what she says.
(7:46) Watch. (7:47) We do work together day to day to try and advance the technology and understanding around (7:52) responsible A.I. (7:54) But today, I won’t be speaking as much from the Google perspective, but from my own experience, (8:00) I have worked at Google for over 14 years. (8:02) I’ve led about six different teams, mostly in the user research, the user experience area, (8:08) and now in the ethical user impact area.
(8:11) So I’ll be sharing some of my learnings from across that time, but also some of my failures (8:16) and challenges. (8:17) I think it’s OK to talk about things that you’ve made mistakes in because we will make (8:21) mistakes. (8:22) When we’re trying to be good allies, when we’re trying to be anti-racist, we will make (8:26) mistakes.
(8:27) The point is, though, to keep trying, to keep educating yourself and getting better day (8:33) to day. (8:33) It’s about constant learning. (8:36) It’s OK to talk about the things you’ve made mistakes in, says Jen Gennai.
(8:41) When we’re trying to be good allies, when we’re trying to be anti-racist, we will make (8:45) mistakes. (8:46) Well, in retrospect, after the launch of Gemini, that would turn out to be kind of a massive (8:51) understatement. (8:53) But the kind of mistakes that Jen Gennai is talking about in this keynote aren’t mistakes like (8:57) eliminating all white people from Google’s AI, which seems like a pretty big mistake, (9:01) even though, again, not really a mistake.
(9:02) It’s obviously deliberate. (9:03) Instead, she’s talking about failing to live up to the racist ideals of DEI, which apparently (9:08) means treating non-white employees differently. (9:11) Watch.
(9:41) Right away when I first became a manager. (9:44) I made some stupid assumptions about the fact that I built a diverse team, that then they’d (9:48) simply feel welcome and will feel supported. (9:51) I treated every member of my team the same and expected that that would lead to equally (9:56) good outcomes for everyone.
(9:58) That was not true. (9:59) I got some feedback that a couple of members of my team didn’t feel they belonged because (10:03) there was no one who looked like them in the broader org or our management team. (10:07) It was a wake up call for me.
(10:09) First, I shouldn’t have had to wait to be told what was missing. (10:12) It was on me to ensure I was building an environment that made people feel they belong. (10:17) It’s a myth that you’re not unfair if you treat everyone the same.
(10:21) There are groups that have been marginalized and excluded because of historic systems and (10:25) structures that were intentionally designed to favor one group over another. (10:29) So you need to account for that and mitigate against it. (10:32) Second, it challenged me to identify mentoring and sponsorship opportunities for my team (10:36) with people who looked more like them and were in senior positions across the company.
(10:42) Yeah, of course, the irony here is that this woman, Jen, sounds like she’s Scottish or (10:46) Irish or whatever. (10:48) Irish, I’m going to assume. (10:49) But the funny thing is that if you were to ask Google’s AI for an image of an Irish person, (10:54) it would not produce any image that looks anything like her.
(10:57) It would give you a bunch of images of Cardi B and Sexyy Red or something. (11:02) Sexyy Red does have red hair, so maybe she is Irish. (11:04) This is the head of ethics of Google AI, a senior manager, saying that it’s a bad idea (11:09) to treat everyone the same, regardless of the color of their skin.
(11:11) She is explicitly rejecting this basic principle of morality. (11:15) And instead, she says that she learned that she has to treat certain groups differently (11:18) because of historic systems and structures. (11:20) And therefore, she says those demographic groups are entitled to unique treatment and (11:24) mentorship opportunities.
(11:26) Now, later in this address, she goes on to explain what equity means in her view. (11:30) This is where things really get hilarious, to the extent that you can laugh at someone (11:35) this low IQ and also, frankly, evil. (11:38) Watch.
(11:40) Allyship involves the active steps to support and amplify the voice of members of marginalized (11:45) groups in ways that they cannot do alone. (11:48) In the workplace, this can involve many things from being an active mentor or sponsor to those (11:53) from historically marginalized communities to managers of managers setting specific goals (11:58) and hiring and growth for their teams to ensure fairness and equity of opportunity and outcomes (12:03) for underrepresented populations. (12:06) However, back to the point about language being very important, using the title of ally (12:12) can also come across as othering.
(12:14) So I always state both the groups I’m a member of and support, as well as those that I’m (12:19) a member of, more of a mentor and a sponsor of, to ensure that it doesn’t look like that (12:25) I’m othering others. (12:26) So, for example, I would say I’m an ally of women, black people, LGBTQ. (12:32) I want to say I’m a champion advocate of all of these groups, not that I’m outside or exclusionary (12:37) of them.
(12:39) Again, it’s worth emphasizing, these are the people that are behind the AI systems that (12:44) are going to be and really already are ruling the world. (12:48) But I want to repeat what she said, because it’s hard to believe when this is said out (12:51) loud. (12:52) So just to repeat, she says, using the title of ally can come across as othering.
(12:56) So I always state both the groups I’m a member of and support, as well as the ones I’m more (13:00) of a mentor and sponsor of, to ensure that it doesn’t look like I’m othering others. (13:05) Yeah, you don’t want to other the others. (13:08) This is the brain trust at Google behind an AI that has access to all of our data.
(13:12) She’s incapable of speaking without using an endless stream of vapid DEI clichés that (13:16) you’ve heard a million times. (13:17) This supposedly is an original enterprise, artificial intelligence, and it’s being overseen (13:21) by maybe the least original, least intelligent woman that Google possibly could have found. (13:27) On top of everything else, the wacky left-wing stuff, you’re dealing with the most unimpressive (13:32) people that you could imagine in charge of this technology that is just incomprehensible.
(13:41) And this is the kind of person who doesn’t want to other others, which seems a bit contradictory. (13:46) If someone is an other, then how do you not other them, given that they are an other? (13:53) And by the way, just so you know, the word other, if you check the dictionary, (13:55) just means a person or thing that is distinct from another person or thing. (14:01) So if somebody is an other, it just means that they’re not you is all.
(14:05) So if you’re recognizing that they’re an other, if you’re making them an other, (14:08) you’re just recognizing them as a distinct entity from yourself. (14:12) So not othering them means that you are not recognizing them as a distinct human entity. (14:18) It means that, I suppose, we have to pretend that all people are indistinct blobs, (14:23) all lumped together into this great ambiguous blob that we call humanity.
(14:29) I know none of this makes any sense, but she has made it very clear that this DEI word salad is the (14:34) guiding philosophy behind Google’s new AI. There’s no firewall between her and the product. (14:40) Watch.
(14:41) What does responsible and representative AI mean? (14:44) I’ve talked about my team, but that’s only one definition. (14:47) So for us, it means taking deliberate steps to ensure that the advanced technologies that (14:52) we develop and deploy lead to a positive impact on individuals and society more broadly. (14:57) It means that our AI is built with and for everyone.
(15:02) We can’t just assume noble goals and good intent to prevent or solve ethical issues. (15:07) Instead, we need to deliberately build teams and build structures that hold us (15:12) accountable to more ethical outcomes, which for us, the ethical outcomes in Google will (15:17) be defined as our AI principles, which I discussed earlier. (15:20) It’s easy to point and laugh at imbeciles like this and the products that Google has created.
(15:25) On some level, it’s genuinely hilarious that an AI product can be so useless that it can’t (15:30) generate images of white people, even white historical figures. (15:33) It’s also amusing in a way that Gemini is so unsubtle and ham-fisted that it (15:38) straight up refuses to answer questions about, for example, (15:41) atrocities committed by communist governments. (15:43) Someone else asked about the Zoom exploits of CNN commentator Jeffrey Toobin, (15:48) and it wouldn’t answer that question.
(15:50) But the truth remains that the people behind Gemini have extraordinary power. (15:53) I mean, this debacle makes it very clear that the AI algorithms underlying products that (15:59) millions of people actually use, like Google, are completely unreliable, and worse. (16:04) In fact, they’re deliberately lying to us.
(16:06) They’re downranking unapproved viewpoints and disfavored racial groups. (16:10) And they’re promoting the laziest possible brand of neo-Marxist ideology at every opportunity. (16:16) And they’re doing it also to influence the next presidential election, by the way.
(16:20) You might remember that after Donald Trump won in 2016, Breitbart posted leaked footage (16:24) of Google executives grieving during an all-hands meeting. (16:29) Let’s watch that again. (16:30) I certainly find this election deeply offensive and I know many of you do too.
(16:37) It did feel like a ton of bricks dropped on my chest. (16:40) What we all need right now is a hug. (16:41) Can I move to Canada? (16:45) Is there anything positive you see from this election result? (16:51) Oh, boy, that’s that’s a really tough one right now.
(16:55) Now, in other parts of the video, they go on to say that (16:58) the election is the result of the people voting and that they accept the results. (17:03) But Google issued a statement about the video, saying nothing was said at that meeting or any (17:09) other meeting to suggest that any political bias ever influences the way we build or (17:13) operate our products. To the contrary, our products are built for everyone.
(17:18) Sure it is. (17:20) I find this election deeply offensive. (17:22) We all need a hug.
(17:23) We’re told it was at this moment that Google decided that downranking (17:27) conservative websites wasn’t enough to really influence elections. (17:30) They decided that they needed to develop an AI that will force-feed (17:34) DEI and anti-white racism on everyone at every opportunity. (17:38) Their only mistake, which is the same mistake they made in that video back in 2016, (17:41) is that they were too obvious about their intentions.
(17:44) And now everybody knows exactly where Google stands. (17:47) We have a pretty good idea what our future AI-driven dystopia (17:51) will look like, or already does look like.