Elon Musk isn’t sure about artificial intelligence.
He worries. He thinks it’s dangerous. He fears it could threaten humanity itself.
So when Google presented its new AI-based camera, Google Clips, last week, you might imagine that Musk wouldn’t be overly excited.
On Saturday, however, he seemed to accuse Google of a blatant disregard for privacy.
Musk took to Twitter and referenced a video of Clips posted by the Verge. “This doesn’t even *seem* innocent,” he tweeted.
Clips, you see, works by using AI to instantly recognize faces of special interest to its owner and, when it spots those faces, takes candid pictures of them.
Without the face-owners necessarily knowing.
An LED light does flash to say the camera is on. There are, however, lots of those around the house that we happily ignore.
Google declined to comment specifically on Musk’s tweet.
A company spokeswoman did tell me, however, that Clips is “a camera and made to be used intentionally to capture more moments — 7-second clips — of the people that are important to you.”
She added: “All of the machine learning happens on the camera and Clips does not connect to the internet to transfer content. And, just like any point-and-shoot, nothing leaves the camera until you decide to save it and share it.”
Some might say, though, that it suffers from the same photographing-people-when-they-don’t-know-you’re-doing-it problem that made Google Glass so reviled.
Perhaps it’s the vivid memory of Glassholes that encourages the company to discourage anyone from clipping the camera to themselves. Google claims the camera has to be stationary to achieve its best effects.
Musk’s overt criticism is unlikely to amuse Google. He has a dedicated following.
Some might say, though, that the problem with Clips is a problem with much of technological progress.
Why, during Hurricane Irma, Tesla remotely unlocked extra battery capacity in some cars at their owners’ request.
If it can do that, couldn’t it stop cars remotely too?
At heart, you either trust the company you’re dealing with or you don’t. And Silicon Valley companies aren’t currently inspiring much trust.