This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
I had a big moment over the weekend.
What was that?
I wore my Apple Vision Pro on an airplane for the first time.
Oh my God. Well, you don’t appear to have any visible injuries on you. It doesn’t look like you were assaulted.
[LAUGHS]
So how did that go?
Well, it went great. So I pull it out. I put it on. I connect to the Wi-Fi just like Joanna Stern told us last week it was possible to do. And I watch “Suits” in like IMAX full-size. Like the window that I am — that I’m watching “Suits” in is as big as the freaking plane.
Meghan Markle has never been larger.
Exactly. But I ran into a dilemma, which was you know how you can turn the dial for immersion?
Yes.
So you can either turn it so that you can see everything around you, and you just have this floating window with “Suits” playing. Or you can turn the dial all the way to the other side, in which case you don’t see anything around you, and just can pick your background. I was on the surface of Mars. So —
But what’s the case for wanting to see what’s around you?
So here we go. So I immersed myself. I’m on the surface of Mars. I’m watching “Suits.” It works great except I missed the drink cart.
Oh, no.
I don’t see the drink cart coming by, and I miss my drink.
That’s so great. Because the flight attendant could have just tapped you on the shoulder and said, do you want a drink? But instead, they made the right choice and said, screw this guy. He wants to look like that, he can get his own damn drink.
So this is the dilemma of flying in the Vision Pro, I have learned.
No, this is not actually a dilemma. If you want a drink, you need to engage with reality, my friend. The choice is yours.
[THEME MUSIC]
I’m Kevin Roose, a tech columnist at “The New York Times.”
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, a bill that could ban TikTok passes the House of Representatives. What’s next? Then we’ve got to ask, does Kate Middleton actually know how to Photoshop? And finally, “The New York Times” Kashmir Hill joins us with an investigation into how your car may be spying on you.
Rev it up.
Personally, I’d pull over.
[THEME MUSIC]
All right, Casey, this week, we have to talk about what is happening with TikTok, because it has been a very big week for that app, and I would say for social media in general.
Yeah, there have been a lot of moves over the years to maybe ban TikTok, but what we have seen this week is the most serious of those moves so far.
So on Wednesday of this week, the US House of Representatives voted to pass something called the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA! So this bill passed the House of Representatives on Wednesday with a vote of 352 to 65, so pretty overwhelming bipartisan support for this bill.
It’s a bill that would essentially require ByteDance, the Chinese conglomerate that owns TikTok, to sell it. So, Casey, let’s talk about what this bill means, and how we got here, and what the implications are. But first, let’s just say what’s in the bill.
So two things — one, if it gets signed into law, it requires ByteDance to sell TikTok within 180 days. And two, if ByteDance chooses not to do that, the bill creates a process that would ultimately let the president decide whether the app should be banned on national security grounds.
Right. So basically, if you have TikTok on your phone and this bill passes, 180 days from when it passes, your app will not be able to get updates anymore, and it won’t be available in the app stores.
Mhm.
So let’s just remind each other how we got here, because this is not a new topic. As you remember, Donald Trump, when he was the president, tried to ban TikTok or force a sale of TikTok. That came close to happening, but then fell apart in the late stages of that process.
There have been other attempts to ban TikTok. Montana actually passed a law banning TikTok within the state. That was overturned by a court. So this has been a long process, and a lot of different organizations and lobbying groups have been pushing for a TikTok ban for years. But why do you think this is coming to a head now?
Well, in a way, there have been a series of events that brought us to this moment. Over the past year, TikTok was banned on federal devices. A number of states moved to say that, hey, if you are a state employee, we’re going to take this thing off of your phone.
Behind the scenes, ByteDance was having a bunch of conversations. They tried to implement this program called Project Texas, which would try to silo Americans’ data and create a bunch of assurances, essentially, that TikTok could not be used for evil here in the United States.
And all of those things just fell on deaf ears, and I think as we have begun to approach our election here in the United States, Kevin, lawmakers are increasingly concerned about what’s happening. And one thing we learned is that the Biden administration, which really wants to ban TikTok, gave a classified briefing to members of Congress recently in which they made the case — we don’t know exactly what was said at this briefing, but whatever it is seemed to really motivate a lot of members of Congress to get this thing out of the App Store.
Yeah, I think that’s a good point. The Project Texas thing that we’ve talked about on the show before was not successful in terms of convincing Americans and American lawmakers that TikTok was no longer a potential threat to national security. I also remember you went down to a TikTok transparency center, which they were giving tours of to various reporters and lawmakers and skeptics to try to say, look, we are being transparent. We are letting people see our algorithms so they can see there’s no nefarious Chinese plot to seed propaganda in there. That ultimately did not appear to be convincing to that many people either.
Yeah, I mean it put TikTok in this position of having to continuously prove a negative, which is that it had never been used to do anything bad and never would. And it’s just really hard to do that.
Yeah, and I think for a lot of people, including me, the assumption was that there would be this tension over TikTok but that ultimately nothing would happen. But now it appears there is this real bipartisan effort that may actually succeed. And the question of why is this happening now is really interesting, and I think it has a lot to do, frankly, with something that we haven’t talked about much on this show, which is the conflict in Israel and Gaza, which has brought new attention to TikTok in part because there’s a coalition of people in Washington who believe that TikTok is being used to turn American public opinion against Israel.
And there has been some viral analysis that showed that pro-Palestinian videos on TikTok were dramatically outperforming pro-Israel videos, and “The Wall Street Journal” reports that this was a big issue that caught the attention of lawmakers on both sides of the aisle who said this app is a problem. It is basically helping to brainwash American youth into not supporting Israel, which I think is dubious for all kinds of reasons, but that does appear to have been a factor here.
Yeah. Around this time, there was a tool that advertisers could use to track the performance of various hashtags, and some researchers used it to see that videos with pro-Palestine hashtags appeared at times to be getting more than 60 times more views than videos with pro-Israel hashtags. We don’t know why, and there’s not really any evidence that TikTok was putting its thumb on the scale. But that research really seems like it scandalized Congress and once again drew attention to the fact that if someone at TikTok or ByteDance or the Chinese Communist Party did want to put their thumb on the scale, there was absolutely nothing to stop them.
And this has been the core problem that TikTok has had from the start, which is that even if it does nothing wrong, there is always the potential that the Chinese Communist Party could force them to.
Right. So let’s talk about the process from here. So the first thing that is needed to turn this bill, PAFACA, into law is that it needs to pass the House. That has happened. Now the next step is it needs to be introduced to and passed by the Senate. Do you think that’s likely to happen too?
Well, actually, maybe not. There’s been some reporting in “The Washington Post” this week that Senator Rand Paul has come out and said that he does not intend to support this bill. He thinks that Americans should be free to use whatever social media apps they want to, and he just does not see the need for this app to be banned.
I would also say that Chuck Schumer, who is the Senate Majority leader and a Democrat, seems pretty wobbly on this one as well. He has not committed to bringing this thing to a floor vote, and I believe — talk to lobbyists on Capitol Hill, and they’ll tell you that Chuck Schumer is a big reason that a lot of tech bills don’t get passed, because he just doesn’t believe that they need to be regulated much at all.
So because of those two reasons, and the fact that there is still no companion bill in the Senate, yeah, Kevin, I think this one does have long odds ahead of it. But what do you think?
I think it could pass the Senate. I think I’ve been surprised at the bipartisan support. We’ve seen a few lawmakers come out vocally in defense of TikTok or at least in opposition to forcing it to be sold. But the majority of Congress people have signaled that they would support this. If it is passed by the Senate, it would then move to President Biden who would need to sign it. The White House has indicated that he would. And from there, then the bill would need to survive legal challenges, which ByteDance has signaled it will mount. If this bill is passed, they will try to stop this in court. So the bill would need to overcome that challenge.
But if it does — say all of that happens, and this bill is passed and holds up in court, it would give ByteDance about six months to undertake a sale of a massive tech product that it really doesn’t want to sell.
Yeah, and when you talk to the folks at ByteDance, they will say, make no mistake, this is not about a sale. This is a de facto ban of TikTok in the United States. And I believe the reason that they are saying that is that the Chinese government has given indications that it will not allow ByteDance to divest TikTok. And so ByteDance will effectively have no choice but to stop offering the app in the United States.
Yeah, so TikTok has obviously not taken this news of this bill lying down. They have mounted an aggressive lobbying campaign in Washington. They have a huge lobbying team there that is presumably fanning out over Capitol Hill, trying to convince lawmakers to drop their support for this bill. They also have started mobilizing users.
So this week, TikTok sent a push notification to many, many of its United States users that urged them to call their representatives and called this law a quote, “total ban on TikTok,” which is not totally true. It doesn’t ban TikTok, it just forces ByteDance to sell it. But the company wrote in this notification that this bill would, quote, “damage millions of businesses, destroy the livelihoods of creators across the country, and deny artists an audience.”
Did you get this notification?
I did not because I do not enable TikTok to send push notifications. Because, like every other app, it sends too many.
Right. So this was shown to users who opened their apps, and it did apparently result in a flood of calls to congressional representatives and their offices. One Congressman, Florida’s Neal Dunn, told the BBC that his office had received more than 900 calls from TikTokers, many of them vulnerable school-aged children, and that some callers’ extreme rhetoric had to be flagged for security reasons.
Which I don’t understand. Were they threatening the congressperson if TikTok were banned?
I’m going to assume that, yes, they were absolutely threatening the congressperson.
So a flood of kids contacting their representatives to complain about this bill. What do you think about this strategy?
Well, we have seen it used effectively before. Uber would do this in cities that were threatening to ban Uber. They would show information in the app saying, hey, why don’t you call your representative here? We’ve seen Facebook and Google put banners in the app, talking about issues of concern to them, net neutrality and other things.
And this has always been pretty non-controversial, actually. And it has demonstrated that these apps do have really dedicated user bases. They want to keep using these apps. And so I was laughing this week, Kevin, when Congress was just so outraged at the fact that some of their constituents called them to express an opinion about a bill that was before them. You know?
Right. I do think it was an interesting strategy, in part because one of the charges that TikTok and ByteDance are trying to dodge is this idea that they could be used as secret tools of political influence. And one of the ways that they are responding to this is by becoming a non-secret tool of political influence, literally trying to influence the political process in the United States with these push notifications.
But that aside, I do think this is a playbook we have seen before from other companies that are being challenged by new regulation.
But I just want to say, again, is the message from Congress that they don’t want to hear from their constituents about this? Is it like, only call us if you’re a registered lobbyist? Like, is that what they’re telling us? Because that kind of sucks!
Yeah, it kind of sucks, but it also is the case that these congressional offices are not set up to handle the volume of incoming calls that they got.
Who cares?
I don’t know. You try getting 900 calls from angry teens. See how you like it.
Why else do the phones exist in the offices of Congress if not to solicit constituent feedback? Is like, oh, what, like your DoorDash order is at the front door? Is that the message that wasn’t getting through? I don’t understand this.
I don’t know. I think you should be able to text your congressperson. Because they text us so often when they’re fundraising or they’re trying to get elected. I think it would be turnabout is fair play.
I agree with that.
So another wrinkle in this TikTok story is what has been happening with Donald Trump. Because Donald Trump, obviously, is no fan of TikTok. His administration tried to force the app to be sold, much like the Biden administration has done this time around.
But he’s flip-flopped. He did a TikTok flip-flop, and he now says that he does not support banning TikTok in the US. He told CNBC on Monday that, quote, “there’s a lot of good and a lot of bad with TikTok.” And he also said that if TikTok were banned, it would make Facebook bigger and that he considers Facebook to be an enemy of the people.
So, Casey, why is Donald Trump now changing his tune on TikTok?
Well, look, you’re always on shaky ground when you try to project yourself into the mind of Donald Trump. But we know a couple of things. One is that there is this billionaire with the incredible name of Jeff Yass. Yass, Queen! Jeff Yass is a very rich person and big donor who recently befriended Trump. And Yass’s company owns a 15 percent stake in ByteDance.
And he, we believe, has been lobbying Trump behind the scenes, and the thought there, the suspicion there is that there is some sort of quid pro quo. It’s like, hey, you leave TikTok alone. I’m going to be a big backer to your campaign at a time when you desperately need money.
Yeah, so Donald Trump has officially gotten yass-ified.
This is a really — this is a story about the yassification of Donald Trump.
So another factor here is that a lot of people in Donald Trump’s camp, including Kellyanne Conway, his former advisor, have been lobbying him in TikTok’s defense. “The Washington Post” also reported recently that part of Donald Trump’s antipathy toward Facebook in particular has been fueled by watching a documentary about how Mark Zuckerberg’s donations to election integrity causes in 2020 helped fuel his defeat.
According to “The Washington Post,” Donald Trump watched this documentary about Mark Zuckerberg’s political donations, got very mad about it, and has ever since been opposed to Facebook on any grounds. Obviously, banning TikTok in the US, one of the main beneficiaries of that would be Facebook and Meta because they have competing short form video apps like Instagram Reels.
Yeah, and we should say, the money Zuckerberg donated was to support basic election infrastructure so that people could vote. These were not partisan donations. These were donations to local election offices to make sure that the election could just run smoothly. And because the Republicans lost, it has infuriated them ever since. So this has become a huge talking point on the right: that Mark Zuckerberg is an enemy of the people because he supported people being able to vote. So I just wanted to say that real quick. Now, what I will also say, though, Kevin, is that Trump is right that one of the two primary beneficiaries of such a ban is Meta, and we’ve spent a long time now in this country worrying that Meta is too big and too powerful, and this would absolutely make Meta bigger and more powerful.
And the other one presumably is YouTube, right?
Is Google and YouTube. YouTube is already the most used app among young people in the United States. And if you take away TikTok, you better believe that the average time they spend on YouTube is about to go up.
So one question that I have for you that I don’t know the answer to is, do we know if Meta and Google, which owns YouTube, are doing any kind of lobbying around this bill? I remember several years ago, there were stories about how Meta had hired a GOP lobbying firm called Targeted Victory basically to try to convince lawmakers that TikTok was a unique danger to American teens. What are TikTok’s rivals doing this time around?
So I don’t have any specific knowledge of what they’re doing in this case, but for the exact reason that you just mentioned, I do assume that their lobbyists are in the ears of lawmakers saying, hey, this is the time to get rid of this thing. This thing is dangerous.
Meta is always scheming to eliminate its rivals whenever they can. This is a really juicy opportunity. Why else would you pay the lobbyists that they pay if you weren’t telling them to go hard after this?
Right. So let’s talk about the core argument here, that TikTok needs to be banned or sold because it is a threat to national security. And maybe a good way to do this would be for us just to outline what is the best possible case for banning TikTok, and then we’ll talk about the case against it. But let’s really try to steelman the worries that people have here.
What is the best possible argument you can imagine for banning TikTok?
I would say a few things. One is essentially a fairness argument. China does not allow US social networks to operate there, even though we allow their social networks to operate here. And I think that there is a question of essentially fair play. China gets this playground where if they wanted, they could push pro-China narratives using these big apps that they have built in the United States. The United States does not have the same opportunity inside China.
So that’s one thing I would say. The second thing is that I think that the data privacy argument is real. We have had Emily Baker White on this show. ByteDance used data about her TikTok account to surveil her and other journalists because they were worried about what she was reporting on about their company. So this question of could ByteDance use Americans’ data against them, it’s not abstract. It’s already happened. The company’s hands are not clean. How many Americans do you want that to happen to until you take action?
So those are two things that I would say. What do you think? What are reasons why you might want to ban ByteDance?
Well, one argument is just that we already have laws in this country that restrict the foreign ownership of important media properties. Like a Chinese company would not automatically be allowed to buy CNN or Fox News tomorrow if they wanted to. They would have to basically go through an approval process with the FCC because our laws limit the foreign ownership of those kinds of broadcast networks.
Rupert Murdoch, in fact, basically had to become an American citizen before he could buy Fox News, because that was the law on the books then and the law on the books now. So in some ways, it is strange that we would allow a Chinese company, a company owned by an adversary of the United States, to own a very important broadcast medium in the United States.
We don’t allow it on TV. Why would we allow it on smartphones? So that’s one argument there. Another argument for banning TikTok is not that an app this popular with Americans, and controlled by an adversary of the US, has already engaged in sneaky attempts to sway American public opinion, but simply that it could. We’ve now seen just this week that when TikTok wants to, it can try to get a bunch of American young people to call their Congress people. That is a political influence campaign, and it’s one that TikTok itself was behind.
And you have to think, what could TikTok do in the upcoming election? What would it do in the case of a war between China and the US? If it can mobilize American citizens to oppose a TikTok ban, consider what it could do if, for example, China invaded Taiwan, or if there was a war between a Chinese-backed state and the United States or its allies. There are so many ways that an app this powerful in the hands of an adversary could be a danger to US interests. And so while some of the more extreme arguments for banning TikTok on national security grounds don’t really register with me, for me, it’s more like, well, what could happen in the future? How could this thing be used in a way that works against American interests?
Well, at the same time, Kevin, there are reasons why I think it would be bad to ban TikTok, and we should talk about those.
Yeah, so what are the most convincing reasons not to ban TikTok or to oppose this bill?
So one big reason is that you’re not addressing the root of the problem here. We don’t have data privacy laws in this country. If you’re worried that your data might be misused by TikTok, I guarantee you there are a lot of other companies that are actively misusing your data and profiting from it. In fact, we’re going to talk about that later in this very show.
So this issue goes far beyond TikTok, and I’m continually surprised that Americans aren’t more upset about all the ways that their data is being misused today. And my worry is that when we ban TikTok, Congress will essentially wash their hands of the issue, even though Americans are going to continue to be harmed actively by things that, at least when it comes to TikTok, are still mostly theoretical.
Yeah, the other argument against this bill that I’ve found compelling is one that organizations like the ACLU and the Electronic Frontier Foundation have been making. Both of those groups oppose this bill in part because they say that what’s happening on TikTok is First Amendment-protected speech, and that essentially by banning this app because you don’t like what’s being shown to people on it, you are not just punishing a foreign government, you are also punishing millions of Americans who are engaging in constitutionally-protected speech on this app.
And moreover, these organizations say this just gives a blueprint and a playbook and a vote of support to any authoritarian government around the world that wants to censor its own citizens’ speech on social media. If you are a dictator in some country and you don’t like what people are sharing on an app, you can now point to this bill and say, look, the US is banning social media apps because it deems them a threat to national security. We are going to ban an app that we don’t like as well.
Yeah, and I think that concern is particularly pointed given that it really does seem like a big factor motivating Congress here is that the content on TikTok is too pro-Palestinian for them. That really does seem to be one of the big reasons why this bill gained so much momentum so quickly is something about specific political speech. So I do think the courts will weigh in there.
I think there’s one other thing that is worth saying about why I think banning TikTok could be bad, which is that it takes the other biggest platforms in this country, and it makes them bigger and richer and more powerful. So Meta and Google and YouTube are the other platforms where all sorts of video is being uploaded every day. That is where more video is being consumed.
Instagram had more downloads than TikTok last year. YouTube is the most used app among young people in the United States. And when you get rid of TikTok, an app that has 170 million users a month, they are all just going to go spend more time on YouTube, more time on Instagram and other Meta properties.
So it’s going to be hugely beneficial to those companies. And before the TikTok controversy came along, Kevin, people like you and me spent most of our time worrying that Meta and Google were too rich and too powerful. So this is just something that worsens that problem even more.
Totally. And we know that this is not hypothetical, because TikTok has actually been banned before in India. It was banned in India in 2020, and we saw what happened after the ban went into effect: there were some little homegrown Indian apps that popped up to capture some of the audience, but the vast, vast majority of users just started using Instagram Reels and YouTube instead. Those companies got bigger in India because TikTok was banned, and I think it’s fair to assume that the same thing would happen here. And for all kinds of reasons, you might not want that to happen if you’re a regulator.
Yeah. So look, we’ll see what happens here. I do think that this bill still has a long road ahead of it. Again, we have never passed a tech regulation in this country since the big tech-lash began in 2017. So if this happens, it would truly be unprecedented in the modern era.
But at the same time, that House bill moved faster than basically anything we’ve seen during that time when it came to regulating big tech, and so this is something we should keep our eyes on.
Right. So, Casey, weighing all of these arguments for and against banning TikTok, where do you come out of this? What is your preferred outcome here?
I have to say, and it makes me uncomfortable to say, but I do lean on the side of them banning it.
Really?
Yeah. Again, that fairness thing bothers me. The fact that we can’t have US social networks in China but they can have social networks here, there’s just an imbalance there. We have rules in this country around media ownership by foreign entities which you just described for us. I don’t understand why you would have those rules for broadcast networks and newspapers that arguably don’t even matter anymore and not have them for the internet, where maybe the majority of political discourse takes place now.
So this just feels like a moment where we need to update our threat models, update our understanding of how the media works and say, hey, it doesn’t actually make sense for there to be something like this in the United States. And I say that knowing that if Congress follows through, we are going to get rid of a lot of protected political speech. We are going to make Meta and YouTube bigger and more powerful in ways that make me totally uncomfortable.
So I hate the options that I have here, but if you were to make me pick one, that’s probably the one I would pick. But how about you?
Yeah, I think my preferred outcome here would be that ByteDance sells TikTok to an American company, to Microsoft. Or remember when Oracle and Walmart were going to team up to bid on TikTok back during the Trump days? Something like that I think would actually assuage a lot of my fears about TikTok as a covert propaganda app for the Chinese government, while at the same time allowing it to continue to exist.
If that doesn’t happen, I think I’m with you. I think I am more and more persuaded that banning TikTok would be a good idea, in part because of the reaction that we’ve seen from ByteDance and TikTok just over the past few weeks as this bill has made its way through Congress. We have not seen them engaging in good faith. We have seen them exaggerating, calling this a total ban. We’ve also seen pushback from ByteDance and presumably from the Chinese government too, which indicates to me that they do view TikTok as a strategic asset in the United States and that they do not want to give that up.
So for all those reasons, I was skeptical of a TikTok ban, and now I think I could get behind it.
Well, it sounds like in the meantime then, if there are any TikToks you love, might want to go ahead and save those to your camera roll.
Yeah.
[MUSIC PLAYING]
When we come back, palace intrigue finds its way to “Hard Fork” from a literal palace.
So today we have to talk about the biggest story on the internet this week, which is what is happening with Kate Middleton, also known as the Princess of Wales.
Specifically, what is happening with her photograph that she posted and the many questions it has raised about the fate of our shared reality?
Yes, so let’s just give a — if you’ve not been keeping up with this story, let’s just give a basic timeline of what’s been happening.
And truly everyone has been keeping up with this story, so make it snappy, Roose.
OK. So basically about two months ago, on January 17, Kensington Palace released a statement notifying the public for the first time that Princess Catherine had gone into the hospital for a planned abdominal surgery.
And Princess Catherine is Kate Middleton, because once you’re a princess, you get a bunch of new names.
Right. Technically, the statement said Her Royal Highness the Princess of Wales was admitted to the hospital yesterday for a planned abdominal surgery. This statement comes as a surprise to people who watch the royal family. No one had said anything about her having abdominal surgery.
She had great abs.
Yes, and the royal family’s pretty withholding about personal details, so people roll with it. Then, a couple of days later, we get the start of the conspiratorial talk. A Spanish journalist named Concha Calleja, who has written a lot about the royal family over the years, is also something of a conspiracy theorist herself.
She wrote a book suggesting that Michael Jackson had been murdered, to give you a sense of where this person falls on the truth versus fiction spectrum.
Exactly. So not exactly Walter Cronkite, but this report is widely talked about. She reports that Kate Middleton was actually admitted to the hospital several weeks before Kensington Palace said she was, and that she wasn’t doing very well. About a week later, the same Spanish journalist suggests that actually the Princess of Wales is in a medically-induced coma.
Following this report, a spokesperson for Kensington Palace responds, basically saying this is all total nonsense. From what I understand, this is quite rare that a royal family spokesperson will comment on what are essentially internet rumors.
So the fact that even they denied it then maybe raised some suspicions.
Right. So then following this denial from Kensington Palace, there are a bunch of seemingly small things that just tip people more into the land of conspiracy theories. Prince William pulls out of a planned memorial service that he was going to go to at the last minute, claiming that it was a personal matter.
Then a few weeks later, on March 4, paparazzi take some grainy photos of the Princess of Wales with her mom, driving in a car. And people immediately start to think this isn’t Kate Middleton. This is a body double. People come up with all kinds of theories about why this is not actually the Princess of Wales. This is someone pretending to be the Princess of Wales. What has happened to the Princess of Wales?
So the suggestion is that this was essentially staged for the benefit of the paparazzi.
Exactly. And then just a couple of days ago, we got the biggest turn of events in this saga so far, which was that on Sunday, which is Mother’s Day in the UK, which — side note, I didn’t know that they had a different Mother’s Day than we have here.
Well, it’s because they have a different word. They call them mums.
That’s true. So on Mum’s Day, Kensington Palace released a photo of Princess Catherine with her kids. And it was signed with a C, which is what Princess Catherine does with all of her social media posts. And this photo was presumably intended to dispel these rumors and say, look, here she is looking happy with all of her kids surrounding her. Instead, this totally backfires because people start pointing out that this photo has been pretty obviously manipulated.
I mean, the forensic analysis that was immediately applied to this photo, I truly do not remember anything like it. And on one hand, yes, it’s obvious that people are going to be poring over this photo for any signs of strange things, but, man, did people do this in a hurry.
Yeah, it got the full Redditor treatment, this photo did. People noticed that the kids’ hands were oddly positioned. There was clearly some editing done on one of the daughter’s sleeves. Princess Catherine was not wearing any of her wedding rings, and there was one window pane that looked blurred. There was a zipper that was misaligned.
And following this uproar about this photo, the major photo wires that distribute photos to the news media from Kensington Palace issued what is known as a kill order.
They killed it!
They killed it!
This is like the equivalent of — in the old newspaper days, when you realized you were about to make a mistake, and so you’d run down to the printing presses, and you would say, stop the presses, right? This just does not happen all that often, Kevin, that we see one of these kill orders.
Yes, so basically a kill order is something that Getty or the AP or another news agency can issue to people who might use their photos, saying, do not use this photo anymore. In this case, these agencies said, it appears that this photo has been manipulated, and so we do not think you should use it anymore.
And this rarely happens. Mia Sato, who’s a reporter for “The Verge,” looked into this and reported that someone at a wire service told her they could count on one hand the number of kill orders they issue in an entire year. So this is a big deal.
It is.
So shortly after this kill order came out, Kensington Palace released another statement, this one supposedly from Kate as well, also signed with a C. Quote, “like many amateur photographers, I do occasionally experiment with editing. I wanted to express my apologies for any confusion the family photograph we shared yesterday caused. I hope everyone celebrating had a very Happy Mother’s Day. C.”
So they also didn’t release any other photos or give the unedited version of the manipulated photo. And so this statement, it did not do a good job of placating the critics who believe that something more is going on.
No, this is a real raise-more-questions-than-it-answers moment. Because if she wanted to, she might have said at least one or two things about how she edited the photo. Or if there was any particular thing, oh, my daughter’s sweater didn’t look quite right, and so I wanted to see if I could fix that. Obviously, won’t make that mistake again. That was not what happened here.
Right. This was not a simple case of taking out some red eye or maybe using the blur tool to cover up a zit on your face or something like that.
Right, or trying to smooth out your skin or make you look younger, like you would edit one of your photos.
[LAUGHS]: So we should say, to close the loop on the saga of the Princess of Wales, there are a lot of theories going around out there on social media about what has happened to the Princess of Wales.
And what’s the most irresponsible one?
[LAUGHS]: Well, the one that I’ve seen going around that I think is the funniest was someone actually compared the timeline of her disappearance with the production schedule for “The Masked Singer” and speculated that she’s been hiding because she’s on “The Masked Singer.” I don’t think that is probably —
Gosh, I wish that were true.
— the real answer here.
I wish that were true.
But it’s none of our business where the Princess of Wales is.
Well, it is a taxpayer-funded position over there, right? So arguably, there is some public interest in how is the Princess of Wales doing.
Sure, but I just say it’s none of our business as the hosts of a technology podcast.
Because we’re American citizens.
Exactly.
Yeah. What do we care?
And we fought a war to not have to care about the whereabouts of the royal family.
I would say we fought a war to only care about them when it was interesting.
OK.
You know what I mean?
So you may be wondering, why are we talking about this? This is just some spurious gossip about the royal family. Is this really a tech story? And, Casey, what is our answer to that?
Well, look, you’re right, Kevin, that, generally speaking, when is the last time that a member of the royal family was seen in public is not typically something that we’re interested in. But there were so many weird things about this photo that it actually did wind up squarely in our zone, because what do we often talk about here? We talk about media being manipulated. We talk about our shared sense of reality, how do we separate truth from fiction. And all of a sudden, a very frivolous story had raised what I would say are actually some pretty important questions.
Yeah. So the first thing that people surmised from this was that this may have been AI manipulation in some way, because it is 2024 and a lot of AI image manipulation is going on.
And it’s admittedly very funny to think that the palace was like, gosh, we need to put out a photo of Kate, and so just went into ChatGPT and was like, show us the Princess of Wales and her family smiling for a Mother’s Day photo.
Right. So it does not actually appear that this was due to AI. Obviously, AI image generators have well-documented problems. Sometimes they put extra fingers on your hand. Sometimes they make your eyes look weird.
Sometimes they put your hands on your fingers.
[LAUGHS]: Yes. But it seems pretty clear at this point that this was not AI. In fact, people have been examining the metadata of this image and have concluded that it was shot on a Canon 5D Mark IV camera and edited in Photoshop for Mac. So this is not a generative AI scandal, it appears.
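The kind of metadata check those sleuths ran can be sketched in a few lines. This is a hypothetical helper, not anyone’s actual tooling: it assumes the EXIF tags have already been extracted into a plain dict by a real reader like exiftool or Pillow, and the tag values below (including the Photoshop version string) are illustrative, not the palace photo’s real metadata.

```python
# Hypothetical sketch: flag editing clues in already-extracted EXIF tags.
# Assumes a real EXIF reader (e.g. exiftool, Pillow) produced the dict.

EDITING_SOFTWARE_HINTS = ("photoshop", "lightroom", "gimp")

def editing_clues(exif: dict) -> list[str]:
    """Return human-readable clues about how an image was made and edited."""
    clues = []
    software = exif.get("Software", "")
    if any(hint in software.lower() for hint in EDITING_SOFTWARE_HINTS):
        clues.append(f"edited with {software}")
    if "Model" in exif:
        clues.append(f"shot on {exif['Model']}")
    return clues

# Roughly the kind of tags people reported seeing (values illustrative):
tags = {"Model": "Canon 5D Mark IV",
        "Software": "Adobe Photoshop 25.1 (Macintosh)"}
print(editing_clues(tags))
```

The point is how little it takes: two tags were enough to rule out a generative AI origin and point at a conventional edit.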
But this actually is a really important piece of metadata, Kevin, because something that has happened over the past several years is that the question what is a photograph has gotten very complicated. Our friends over at “The Vergecast” talk about this a lot. Because when you take a photo with your smartphone, it’s taking many, many images at once. And then it is creating a composite out of them.
And so for any image that you’re seeing in your phone’s camera roll these days, there’s a good chance that it’s not actually what the camera saw. It is a bit of a generative AI experience that you’re getting now with every single photo. So if the metadata had come back about the Kate Middleton photo saying this was shot on an iPhone 15, in some ways this would be a more complicated question.
Yes. It’s not just that people can now easily edit photos on their smartphones. It’s that the actual cameras that are built into the smartphones often these days have AI manipulation built into them. So one example is the new Google Pixel phone has a feature called Best Take, where basically it takes a bunch of photos. Say you’re posing for a photo with your family, and in one millisecond when one photo is taken, someone is blinking. And the next millisecond, someone else is blinking or someone’s not smiling.
You can essentially have it take a bunch of photos and pick out the best versions of each person’s face and smush that all into one composite image. And that all happens without the user having to do anything proactive; the basic camera app on the phone just does that. We also know that there’s this whole field of what’s called computational photography, which is basically building algorithms and AI into the way that cameras actually capture images.
So for example, on the iPhone, if you use portrait mode, that portrait mode is using AI to do things like segmentation, to say this is part of the background, this should be blurry, this is part of the subject of the photo, that should be crisp and clear. And that is essentially a form of AI manipulation that is taking place inside the iPhone camera itself.
Yeah, all of this is just to say that there actually is a lot of AI manipulation going on these days in every photo that you’re taking with your iPhone. And of course, we think of this as generally benign because this is not inventing children that you don’t have. It’s not usually putting a smile on your face if there wasn’t one there, although if your eyes were closed, it will open your eyes for you.
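The selection step behind a Best Take-style feature can be sketched like this. To be clear, this is not Google’s actual algorithm, which uses learned face scoring and seamless blending; the frame names and quality scores here are invented, purely to show the shape of the idea: score each person’s face in every burst frame, then composite the best one per person.

```python
# Toy sketch of Best Take-style selection (not Google's real algorithm):
# given burst frames and a per-person "face quality" score per frame,
# decide which frame to pull each person's face from.

def pick_best_faces(frames: dict[str, dict[str, float]]) -> dict[str, str]:
    """For each person, return the frame id with their highest-scored face.

    frames maps frame_id -> {person: quality_score}.
    """
    people = {p for scores in frames.values() for p in scores}
    return {
        person: max(frames, key=lambda f: frames[f].get(person, 0.0))
        for person in people
    }

burst = {
    "frame1": {"mom": 0.9, "kid": 0.2},   # kid is blinking here
    "frame2": {"mom": 0.4, "kid": 0.95},  # mom is mid-sentence here
}
print(pick_best_faces(burst))
```

Even in this toy form, the output is a photo of a moment that never happened all at once, which is exactly the point being made above.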
So I just think that’s good to keep in mind as we move into this new era is that the images that we’re seeing, these are not the Polaroids that we were taking in elementary school, my friend.
Yeah. So I would say the biggest angle that got me interested in this story is just what it means for what people are calling the post-truth landscape. We’ve had lots of people writing their takes this week, talking about how this is the canary in the coal mine for this new era of post-truth reality-making that we have entered into.
Charlie Warzel had a good piece in “The Atlantic” this week where he writes, quote, “for years, researchers and journalists have warned that deepfakes and generative AI tools may destroy any remaining shreds of shared reality. The royal portrait debacle illustrates that this era isn’t forthcoming — we’re living in it.”
So, Casey, do you think this portends anything different about our social media landscape or the way that we make or determine what’s true in this new era?
I think it’s definitely a step down that road, but at the same time, I think that if the worst comes to pass, we’ll actually look back, and we will be nostalgic for this moment, Kevin. Because this was a case where we could just look at the photo with our own eyes and know with total certainty that the image had been doctored to the point that the palace had to come out relatively soon afterwards and say, yeah, you caught us. Our expectation, I think, is that within a couple of years, the palace might be able to come up with a totally convincing image of the princess with her children.
And people who study AI maybe will be able to determine, OK, yeah, this was created with generative AI tools. But maybe they’ll say we actually can’t say one way or another. That is the truly scary moment. But is this a step on the road to get there? Absolutely.
Yeah. I mean, for me, the one thing that surprised me is just how quickly people jumped to skepticism when this photo was released. It feels like in the span of like 10 years, we’ve gone from “pics or it didn’t happen” to “pics and I’m going to study the pics to tell you why it didn’t happen.” It’s like the mere existence of photographic evidence is not enough to assuage people’s concerns about something being real or fake or not. In fact, in this case, putting this photo out just fueled the speculation more.
Absolutely. Now, one funny subplot here, Kevin, and I wonder if you have an opinion on this, is: does the Princess of Wales use Photoshop? Some people saw the statement and said, that’s absolutely ridiculous. If you’re the Princess of Wales, there’s no way you’re going to sit down and learn how to use Photoshop.
I can see it from the reverse, though. You’re cooped up in that palace all day. You have your ladies-in-waiting taking care of most of the household affairs. Maybe you shoot a few cute pictures of the kids, and you say, oh, I don’t like the way that my daughter’s sweater looks. I’m going to see if I can clean that up. So in a way, I find it totally plausible that the princess would learn how to use Photoshop for fun. What do you think?
Yeah, I think if you had told me that the king, who’s elderly —
King Charles.
Yes, King Charles was using Photoshop, I would have said, I’m going to need to see some more proof of that. But you know, Kate Middleton, she’s in her 40s. She’s a mom. Moms like to edit photos of their kids. Have I edited a photo of my kid ever to remove some crud from his shirt? Yeah, I’m guilty.
All right. So this is a coin flip we think, whether Kate Middleton knows how to use Photoshop.
Yeah, I can see it. I can also see reasons for skepticism. Another argument that I thought was interesting that I wanted to talk to you about today is something that Ryan Broderick wrote about in his newsletter Garbage Day, in a post that was titled “Misinformation is Fun,” where he’s basically saying, look, we now know this happens all the time. Something comes out. People get upset or nervous about it. They accuse it of being fake. We get all these expert researchers and reporters coming out to fact check it and say, actually, this is fake or this isn’t true. But his basic point is, look, people are missing that this stuff is fun. It’s fun to speculate. It’s fun to spread rumors. It’s fun to try to connect the dots on some complicated conspiracy theory. This is a piece that people miss when they write about conspiracy theories, as both you and I have done over the years.
Yeah, and it’s important because all of these platforms that seek out, often for good reasons I think, to want to eliminate misinformation are fighting an uphill battle. And the uphill battle is, their users love this stuff. Their users want to spend time on their platforms arguing incessantly about the fate of Kate Middleton.
Right. And do you think that the platforms have a responsibility here? I mean, in this case, this was not a platform story. This photo was disseminated. I guess it was disseminated. It was put on Instagram and maybe other social media networks, but it was really the photo wires and the photo agencies stepping in and issuing this kill order that really turned the volume up on this story considerably.
So what do you think this says about who is responsible for gatekeeping here and telling whether an image is fake or not?
Well, the photo wires here are a great example of an institution that does still have some authority and does still have some trust. And those are becoming fewer and further between in this current world. So I’m very grateful that we have folks like that who can come in and say, oh, yeah, this is obviously doctored. Get it the heck out of there.
There will probably be examples, like in our election for example, where that just is not the case, and there is no authority that can come in and say definitively one way or another this was doctored or not.
Yeah. And I just think this whole discussion about doctored imagery is going to get so much harder as more and more cameras just come by default with AI tools installed in them. So five years from now, is it even going to be possible to take a, quote, unquote, “real photo” or is every camera and smartphone on the market going to have some kind of AI image processing or improvement built into it?
I’ve actually hired an oil painter just to create my likeness. Because it’s the only way that I can trust that I’m seeing my own face, Kevin.
I like that.
Yeah.
So, Casey, I remember a few years ago, when I was doing a lot of reporting on crypto and blockchain projects, one of the things that people would pitch to me periodically in this space is here’s a way to use the blockchain to keep a kind of uneditable version of the metadata to tell the provenance of an image so that you can have a record on the blockchain that says, this image is real. It was not doctored or manipulated in any way, and here’s how anyone can go prove it.
So does this scandal and the associated drama make you think that something like that is actually necessary?
So, look, I don’t like solutions that are on the blockchain. I’m not going to say that no one could ever come up with a way to do it that would be fast and efficient and worthwhile. I don’t think it’s possible to do that today. But there are initiatives to try to verify the authenticity of images on the internet.
So there’s something called the Coalition for Content Provenance and Authenticity. This is a consortium of a bunch of tech companies, including Adobe, Google, Intel, Microsoft, and they are trying to come up with some kind of standard so that you can embed in your photo the fact that this image was taken with a camera, and it was not just spat out by an AI generator. Now, even in a world where that exists, people are still going to share these photos on social media. They’re still going to have endless debates. But it does empower gatekeepers. If there is some image that for whatever reason is playing a role in an election on Meta or YouTube or maybe even TikTok, if that still exists, and they can look at the metadata and say, oh, yeah, this was just obviously created with generative AI, maybe then they’re able to attach a warning label to it. Maybe then they’re able to fact check it. And that’s really useful, right? Newspapers, other journalistic outlets will be able to do the same thing.
So it’s still a little bit tricky. Can you actually come up with a metadata standard that isn’t easily removed from the image? There’s stuff to be figured out. But if you want to know how do I think we will solve this problem, it’s going to look something like that.
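The core mechanism being described, a signed claim that travels with the image so any later edit is detectable, can be sketched with standard-library crypto. Real C2PA Content Credentials use X.509 certificate chains and an embedded manifest, not a shared secret; this HMAC version is a deliberately simplified stand-in (the key and claim fields are invented) just to show the verify-after-edit flow.

```python
# Simplified stand-in for a C2PA-style provenance check. Real Content
# Credentials use certificate chains and embedded manifests; this sketch
# uses an HMAC with an invented shared key purely to show the flow.
import hashlib
import hmac
import json

SECRET = b"camera-vendor-signing-key"  # hypothetical signing key

def sign_capture(image_bytes: bytes, claim: dict) -> str:
    """Sign the image bytes together with a provenance claim."""
    payload = json.dumps(claim, sort_keys=True).encode() + image_bytes
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_capture(image_bytes: bytes, claim: dict, signature: str) -> bool:
    """True only if neither the bytes nor the claim changed since signing."""
    return hmac.compare_digest(sign_capture(image_bytes, claim), signature)

photo = b"\xff\xd8...raw jpeg bytes..."
claim = {"device": "camera", "generator": None}
sig = sign_capture(photo, claim)

print(verify_capture(photo, claim, sig))            # True: untouched
print(verify_capture(photo + b"edit", claim, sig))  # False: bytes changed
```

The open problem the hosts flag is visible even here: the signature only helps if it survives the journey, and today most platforms strip metadata on upload.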
Yeah. So a lot of people are saying this incident has told them or informed them that we are headed into this post-truth dystopia. I actually took a different lesson from it. This whole thing has made me more optimistic, because it has shown that people actually do care what’s true.
They do actually care whether the stuff in their social media feeds, the stuff they’re relying on, represents reality. And they are willing to go to extravagant lengths, including picking apart, pixel by pixel, photographs of the royal family, to determine whether what they are looking at is real or not.
I think that’s a really smart point, because I do think that there is a defeatism that creeps into these discussions. Oh, we’re going to have an info apocalypse, and we’ll never know what’s real anymore. But I think what you said is exactly right, that we have a profound need to know what is true and false, and people are clearly ready to volunteer a significant part of every day to figuring out what is true or false if the story is important enough.
Yeah, I think we should have a coalition of amateur sleuths. Picking apart these photos was maybe a good use of their time for one week, but these people clearly have time on their hands. They clearly have expertise in digital sleuthing. Let’s put them to work doing something more socially beneficial.
Absolutely.
Have them solve some cold cases.
Yeah, take a lesson from “Encyclopedia Brown,” “Nancy Drew,” “The Hardy Boys,” basically everything I read when I was nine. Those kids were on to something.
[MUSIC PLAYING]
When we come back, why your car is snitching on you.
It’s driving me crazy.
[LAUGHS AND SNORTS]: I’ll allow it!
[MUSIC PLAYING]
Kevin, this podcast needs an infusion of cold hard cash, Kashmir Hill.
Yes, today we’re talking with my colleague, Kashmir Hill, who writes about technology and privacy for “The New York Times.” She’s got a new story out, and it’s a banger.
This is one that really caught people’s attention, and for good reason. Because the more of this story that you read, the higher your blood pressure goes.
It’s true. This is a story about cars and all of the data that cars collect about their users and drivers and how cars have become a privacy nightmare. It’s a really good story. It’s about some of these new programs that car companies have installed in their cars that allow them to remotely collect data, and then not just keep that data for themselves but actually sell that to places like insurance companies, which can use it to say, well, Casey is a very bad driver. He braked 72 times yesterday for some reason, so we’re going to raise his premiums.
Oh, yeah, that famous sign of being a bad driver — braking.
[LAUGHS]: The broader point is that cars are now basically smartphones on wheels.
They’re snitches on wheels is what they are.
And they are being used to keep tabs on the people who drive them increasingly and with all kinds of consequences for consumers.
Yeah, you truly may have been roped into one of these schemes without even knowing it. And so if you have your car connected to the internet in any way and you haven’t yet read Kash’s story, I promise you, this is one you’re going to want to listen to.
And when Kash started looking into this, she learned about this whole hidden world of shady data brokers and companies that are selling your data from your car to insurance companies. And today, we wanted to talk to Kash about what she found out in her reporting and what she thinks is going to happen next, if there’s any hope for us in this new world of connected cars or if we’re all just destined to be surveilled and snooped on by these things that we drive around. So today, we’re turning “Hard Fork” into “Carred Fork.”
That doesn’t work at all.
[LAUGHS]:
Even a little bit. God.
[MUSIC PLAYING]
Kash Hill, welcome back to “Hard Fork.”
Thank you. It’s wonderful to be on this award-winning podcast.
Thank you.
Thank you. We did win an award.
Didn’t you guys win the Oscar for Best Technology Podcast earlier this week?
Is the iHeart podcast award the Oscar of podcast awards?
Many, many people are saying it.
Is in that case —
Yeah, it was us versus “Oppenheimer.”
Congratulations.
So, Kash, let’s talk about this story. When did you decide to write about data collection in cars and why?
So I was spending a lot of time lurking on online car forums, forums for people who drive Corvettes and Camaros and Chevy Bolts, which I drive. And I started to see people saying that their insurance had gone up. And when they asked why, they were told to pull their LexisNexis consumer disclosure file.
LexisNexis is this big data broker, and they have a division called Risk Solutions that profiles people’s risk. And when they did that, they would get these files from LexisNexis that had hundreds of pages, including every trip that these people had taken in their cars over the previous six months, including how many miles they drove, when the trip started, when it ended, how many times they hit the brakes too hard, accelerated rapidly, and sped.
And when they looked at how LexisNexis had gotten the data, it said the provider was General Motors, the company that manufactured their cars.
Right, so your story starts with this anecdote about this man named Ken Doll, which, by the way, great name. And he is a 65-year-old Chevy Bolt driver, and like you, he owns a Chevy Bolt, or I guess he drives a leased Chevy Bolt. And his car insurance went up by 21 percent in 2022, and he was like, what the heck? Why are my premiums going up? I’ve never been responsible for a car accident.
He goes looking, and he asks for his LexisNexis report and gets back a 258-page document detailing basically his entire driving history. So does he then make the conclusion that this is why his premiums have gone up, because he’s a bad driver?
Well, he says he’s a very safe driver. He says his wife is a little bit more aggressive than him.
Sure, blame the spouse.
Like the story. Blame Barbie.
And she also drives his car. And yeah, he said that the trips that she took during the weekdays, when he doesn’t usually use the car, had a few more hard accelerations, hard brakes. And yeah, it looked to him like this is why his insurance went up.
And we should say, just because you accelerate hard in a car doesn’t necessarily mean that you did anything wrong. And if you had to brake really hard, that also might not have been your fault. And so I think one of the things that’s infuriating, Kash, reading your story, is that this data, which lacks a lot of really important context, is being hoovered up, often without the knowledge of the people involved, and then being used to gouge them on price.
But I really was just struck by the way that these insurance companies were so eager to use data that might not actually be incriminating.
Right. And LexisNexis said they don’t actually give the trip data to insurance companies. They give a driver score that LexisNexis computes based on that data, and that’s what they’re sharing. But interestingly, they didn’t give Ken Doll his score, so he doesn’t actually know what his score is.
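To make concrete what a score like that could look like: LexisNexis’s actual model is proprietary and undisclosed, so the function below is entirely hypothetical, with invented penalty weights. What it does capture is the criticism in this conversation: trips get reduced to bare event counts, with no context about why anyone braked.

```python
# Hypothetical driver score (LexisNexis's real model is proprietary;
# these penalty weights are invented). Each trip is reduced to event
# counts, with no context about WHY the driver braked or accelerated.

def driver_score(trips: list[dict]) -> int:
    """Start at 100 and subtract invented penalties per recorded event."""
    score = 100.0
    for trip in trips:
        score -= 2.0 * trip.get("hard_brakes", 0)
        score -= 2.0 * trip.get("rapid_accels", 0)
        score -= 3.0 * trip.get("speeding_events", 0)
    return max(0, round(score))

trips = [
    {"miles": 12.4, "hard_brakes": 2, "rapid_accels": 1},  # cut off in traffic?
    {"miles": 3.1, "speeding_events": 1},                  # or an emergency?
]
print(driver_score(trips))  # 91
```

Note that a single number comes out the other end, and, as with Ken Doll, the driver never sees it.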
But that’s such a corporate thing to say is like, oh, don’t worry. We’re not giving the individual data. We’ve created a mysterious impenetrable black box and handed that to the insurance companies. But just trust us. It’s actually really well done.
Well, the other company, Verisk, did say we just give all the trip data and a score.
So let’s talk about how widespread is this? Which cars are sending data about their drivers to insurance companies? Which companies are involved in this? Is this an industry-wide practice or is this just GM and LexisNexis? Is it pretty contained?
So if your car has connected services, like if you have a GM car and you have OnStar, or a Subaru and you have Starlink, your car is sending data about how you use it back to the auto manufacturer. At this point, the only ones that I know of who are providing it to insurance companies are GM, Kia, Honda, Hyundai, and Mitsubishi, with Subaru being a partial exception: Subaru says they only give odometer data to LexisNexis. But the other companies all have this driver-scoring, driver-feedback feature in their apps now.
With GM, it’s called Smart Driver. And if you turn it on, they give you feedback about your driving. Like, drive slower, be gentle with the accelerator, buckle your seat belt.
Drive faster. Take more risks!
Change the music!
Get out — get out of the carpool lane. I’m trying to pass you.
Stop looking at your phone, which is giving you all this feedback.
Yeah.
But people who turned this on may not have realized that they were saying yes to sharing their data. A lot of these programs are actually kind of run by Verisk or by LexisNexis. They’re the ones giving you the feedback, not the automaker. And so you’re sharing your data with them.
This was not well disclosed. In the case of GM, it was not evident at all from any of the language. And a lot of people said that Smart Driver was turned on for their cars, and they didn’t turn it on. They didn’t even know what it was. And it does —
GM gives bonuses to salespeople at dealerships who get people to turn on OnStar, including Smart Driver. So they may have been enrolled by the salesmen when they bought their car. But for other people, if you turn this on, you’re sharing your data. And when you go out shopping for car insurance and you’re trying to get quotes, a lot of the insurance companies will say, can we have permission to get third party reports on you, like your credit file?
And when you say yes to that, it releases all of that data over to the insurance company, and people just did not realize this was happening.
And that detail about salespeople being incentivized to enroll people, often without even fully informing people of what they’re enrolling in, I think, is really important. Because at the end of the day, this product exists because it is essentially free money for GM and these other car manufacturers. Like I think in your story, you say they’re making millions of dollars a year by selling this data. And is this really anything more than a cash grab?
I mean, the car companies say that this is about safety, that they’re trying to help people be safer drivers with this driving coach. But for some of these people that had Smart Driver turned on, they didn’t even know it was on. They’re not getting the feedback. And as you say, General Motors, my understanding is they make in the low millions of dollars per year with this program, which they described to Senator Edward Markey. He asked them, are you selling data? And they said, the data that we sell, that we commercially benefit from is de minimis to our 2022 revenue.
So for them, it’s nothing. This —
It’s a drop in the bucket.
It’s a drop in the bucket.
This is not moving their overall finances.
But it’s not de minimis for the people who have to pay 20 percent more on their car insurance all of a sudden for reasons that they don’t even understand.
Right. So this is — I think what we’re seeing is the surveillance capitalism model. The Google, the Facebook, you get something for free, you’re paying with your data, it’s really spreading to all of these other companies. And the automakers are like, well, we’re getting all this data. We can monetize this too. Right now they’re not actually making that much money. Low millions is small. And some of the automakers told me, we don’t get paid for sharing the data. We only get paid when an insurance company buys it. So they don’t even have a good business deal on this. They should be getting money for all the data.
There is something really damning about saying that to the senator. It’s like, hey, we don’t even make all that much money off this. OK, well, then why are you violating the privacy of your entire user base if you can’t even get a good price for it?
Yeah. One question I have for you, Kash, is how many other uses are car companies finding for this data? Are they just selling it to insurance companies to raise people’s premiums? I mean, I can imagine a situation where a company might like to know which drivers are driving past their store every day so they can show them targeted ads on social media. How many buyers are there for this kind of car data?
I mean, look, there’s a lot of information that’s flowing out of your car and a lot of potential buyers. At this point, what I mainly have been focusing on is this insurance thing. And when it comes to the insurance data, the one thing that all the automakers pointed out is that they’re not providing location details. It’s just when you started the trip, when it ended, how far you drove, and it doesn’t actually include the location data. But that’s not to say the companies don’t have it. They’re just, in this case, not selling it.
So this is, in some ways, not a new phenomenon. I mean, insurance premiums vary based on things like where you live, how new your car is, and whether you’re a young man who is statistically more likely than someone older to be in an accident. Those kinds of things are used to change the prices of insurance premiums all the time. And I guess from the insurance company’s perspective, this is just one more piece of data that they can use to make decisions about how much of a risk someone is out on the road.
Did you hear any principled defenses of this while you were reporting this story from the insurance companies or the companies that sell data to them?
What the automakers really focus on is that they set up these programs to help people get discounts.
God!
So some of the programs, like Honda’s for example, if you turn on driver feedback and then you have a good score, the actual app will offer to connect you with some company who’s going to give you a 20 percent discount. So they’re really focusing on, we’re trying to help our customers and get them discounts.
What they’re not talking about is when that data is flowing out. And it’s hurting their customers. I talked to this one Cadillac driver, who lives in Palm Beach, Florida. And in December, it was time for him to get new insurance, and he got rejected by seven different companies. And he was like, what is going on?
They just wouldn’t sell him insurance for any price.
They would not cover him, and his auto insurance was about to expire. And he said, what is going on? He orders his LexisNexis report, and he has six months of driving data in there, and he says — he says, look, I don’t consider myself an aggressive driver. I’m safe, but he’s like, yeah, I like to have fun in my car, and I brake a lot, and I accelerate. My passenger’s head isn’t hitting the dashboard or anything like that. But, yeah, I speed.
Can it tell whether you’re doing donuts in the parking lot? Because I actually would like to know that.
But he says, look, I’ve never been in an accident, and I couldn’t get insurance. He had to go to a private broker and ended up paying double what he was paying before for insurance. So it really, in that case, hurt him a lot.
So here’s where, I guess, I can maybe feign some sort of sympathy for the idea of doing this, which is like I do want worse drivers to have higher insurance premiums. I think that is how we want the insurance market to work. I think if you’re a good driver, your insurance should be lower. And the best way to know who is a good driver and who is a bad driver is to monitor them obsessively.
But what you have revealed here, Kash, is that once we implemented this sort of surveillance system, it seemed to do what all surveillance systems do, which is needlessly penalize innocent people. So we have all of the downsides of a surveillance system with really none of the upsides.
Yeah, I too want safer roads, Casey. I get annoyed at aggressive drivers. And I talked to this one law professor from the University of Chicago. And he said usage-based insurance — that’s what you call this, when you tell an insurance company they can watch you, they can see your driving.
He said it works. He said the impact on safety is enormous and that people drive better when they know that they’re being monitored and that they’re going to pay more if they drive aggressively or unsafely. But that’s not what was happening here. People were being secretly monitored, and then they’re paying more and they don’t know why. And that is not going to make the roads any safer.
Yeah, that does feel like the stickiest part of this to me is the disclosure piece. If you know — I’ve had experiences in the past couple of years where I’ll go rent a car if I’m on a work trip or something. And part of what I know when I’m renting the car is that the rental car company is tracking that car. And I know this because they tell you when you sign up, and it’s very clearly disclosed — we will track this car. If it gets stolen or something, we can help you track it down, that kind of thing.
So I know that I’m being monitored while I’m driving a rental car, and so I do tend to drive a little bit more conservatively in a rental car. I can imagine that expanding to lots of other cars, but the people have to know that they’re being monitored in order to be able to drive safer as a result of being monitored.
Absolutely.
So, Kash, talk about your reporting a little bit on this. So you started looking through these car forums. You started seeing evidence that people were having their premiums raised as a result of this surveillance by their cars. When you approached the car companies, the data brokers, the insurance companies, did they try to deny what was going on? Were they pretty open about it? How did they react?
I thought that — I was expecting denials. I was expecting that, yeah, they would say this wasn’t happening because it just seemed so shocking to me that they would be doing this. But they ended up confirming it, but there was some evasive language about how it worked.
One of the big things I was asking different companies is, where do you disclose this is happening? And with GM, the spokeswoman said it’s in the OnStar privacy policy, in the section called —
Which everyone reads before they click Accept, in its entirety.
In this section about sharing data with third parties. And so I go and read that section, and the section doesn’t say anything about LexisNexis or Verisk or telematics, which is what you call this driving data. It says if they have a business deal with somebody like SiriusXM, which is the company they name there, SiriusXM is going to get some data from your car. And I just was very shocked that there was nothing more explicit anywhere. And I actually — I told you I have a Chevy Bolt. So I went to the My Chevrolet app. I connected my car to the My Chevrolet app and went through the Smart Driver enrollment. And all it says is, get digital badges. You can get Brake Genius and Limit Hero.
Brake Genius! One of my favorite bands from the last year.
I’m putting that on my LinkedIn profile, certified Brake Genius.
Get driving tips. And there’s just absolutely nothing that would make you realize that as soon as you turn Smart Driver on, that General Motors is going to start sharing everything about how I drive my Bolt with LexisNexis and Verisk and whoever else I didn’t find out about in my reporting.
It should just show you the splash screen of a panopticon, and it should say, is this the future you want? Just tap Yes to continue.
Well, I really do think every company wants this model now. They’re just thinking about how can I get an extra revenue stream through monetizing the data of my customers. And this is not just automakers. This is just anything we’re buying now that’s internet connected.
I mean, what it made me think of when I read your story was TVs. Because a very similar scenario has been happening with smart TVs, which collect all kinds of data about what people watch on them, and then they can sell that data to advertisers. So it is actually, in some cases — and I bought a new TV a few years ago, and I went through this process of realizing that it is actually cheaper to buy a smart TV than a non-smart TV in many cases, because part of how the smart TV makers are making money is not through selling you the hardware. It’s actually through capturing the data and selling the data.
So we do sort of have this phenomenon where, as hardware, any hardware, whether it’s a car or a TV or a refrigerator or a smart toaster or something, as it becomes more connected and more like a device in its own right, the data actually in some cases becomes more valuable than the actual piece of hardware.
I mean, Kash, don’t we just see this all the time, that privacy is just increasingly a luxury good for rich people to pay for?
[SIGHS]: Yeah. I mean, I guess so, but even rich people, I mean they’re buying expensive cars, and their cars are still sending data back about them. I mean, that’s one objection I saw from drivers, like people at General Motors. They said, hey, I paid a ton of money for this car. If you’re going to sell my data, I want a cut of it.
Yeah.
Here’s how I would solve this problem. I think that each manufacturer should be allowed to make one car that sends all your data to everywhere, and there’s nothing you can do about it. You can just choose. So if you buy a GM Snitch, you know that that’s what’s going to happen. And it should cost $100.
It should be the cheapest car on the market.
It should cost $100, my little GM Snitch. It has a direct line to the police whenever I cross over the center divider. And other than that, knock it off.
Yeah. Kash, what has the response been to your story? Are lawmakers outraged?
I’m outraged.
Are drivers sending you stories about being spied on? What has the reaction been?
So I’m definitely hearing from lots of other drivers who are discovering that they had some of these features turned on. They didn’t know it, and they’re turning it off. I did a news you can use box at the bottom of the story. And I said, here’s how to figure out if this is what your car is doing. And one of those was, there’s this website called the Vehicle Privacy Report that you can go to, and it’ll tell you. You put in your VIN number, and it tells you what your car is capable of collecting.
So the person who runs that site said, I’ve had tens of thousands of people come and do it off of your story. I included the link to LexisNexis to go request your consumer disclosure file. And not just for auto data. That file is crazy. It had me associated — I mean, it had tons of pages for me. Had me associated with my sister’s email address from middle school in the 1990s. I was like, why? Why?
So I think everyone should request that, request their Verisk file. And I talked to Senator Edward Markey for the story, and he’s been very interested in what data is being collected by cars and what automakers are doing with it. And he said, when I described to him what GM had done, he said this sounds like a violation of the law that protects us from unfair and deceptive business practices. So I’m sure there’s going to be more to come from this story.
Yeah, and what can drivers do? If they are worried that their car is snooping on them and sending data to a data broker or to their insurance company to raise their premiums, what should they actually do to prevent that? Or are there certain car makers who are not collecting this kind of data? What can the average driver do?
I mean, there are — I can tell you from my time in the car forums, I mean, there are some people that don’t want their data going out from their cars, so they hack it. Basically, they turn off the connected services. They make sure that data can’t leave their car.
I mean, if you sign up for connected services, you are connecting your car back to the auto manufacturer’s cloud servers or whatever. It’s sending data. So just turning that on means that data is getting sent back, and that’s why a lot of these companies, when you buy their car, they’re like, oh, you get this for 30 days for free. And so most people turn it on. And then even if you don’t pay, you’re still connected after that. So.
Wow.
Wait. So even — so they get you to connect it, and then your free trial runs out, but they still keep collecting the data about you that they can sell?
That’s my understanding. And that’s what you agreed to when you read the 50,000 word privacy policy.
Good Lord.
Wow. See, I would at least like my car’s surveillance data to be helpful to me in some way. I would like it to pop up a little notification and say, this is the third time you’ve driven through McDonald’s in the past week. Are you OK? Is something going on in your life? Do you need therapy? Last question, Kash. Do you think this will create a bull market for used cars that don’t have any of this stuff in them? Are we going to see people running out to the car lots to buy the 1985 Ford Bronco that doesn’t have any technology in it?
I mean, this was the basic premise of the “Battlestar Galactica” reboot, by the way, is that the only spaceship that survived was the one that was not connected to the space internet. And so when the AI Cylons attacked, only Battlestar Galactica was safe.
Wow.
That’s true. And I have seen a lot of people commenting in that way. They’re like, oh, I’m so glad I still have a car from 2009. If you’ve got a CD player in your car, it is privacy protected.
Yeah, I am going to go back to the “Flintstones” car that you have to pedal with your feet. I don’t think that was collecting much data on its drivers. All right. Kash Hill, thanks so much for joining us.
Thanks, Kash.
My pleasure.
I’m all worked up now.
Wait. Do you have a car, Casey?
No, I don’t even have a car.
I just —
Strong feelings for no apparent reason.
[MUSIC PLAYING]
“Hard Fork” is produced by Davis Land and Rachel Cohn. We’re edited by Jen Poyant. Today’s show was engineered by Alyssa Moxley. Original music by Elisheba Ittoop, Marion Lozano, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergersen. Go check out what we’re doing on YouTube. You can find us at youtube.com/hardfork. Thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. Especially if you know where that princess is.
Yeah, please tell us.
[MUSIC PLAYING]