Thursday, June 21, 2018

Spotify, Steam, and YouTube: Curation Failure

It is 2018, and the world seems to be moving backwards in terms of speech. Despite, or perhaps because of, a continuing coarsening and dumbing-down of the culture, people seem more likely than ever to take offense at language, media, and even behavior that either would have been unremarkable a few years earlier or seems allowed or uncontroversial in other contexts.

The consequences for saying the wrong thing? Economic exile. You lose your job, and may not be able to get another one. There are countless examples, from Roseanne Barr's recent horrific Twitter eruption to Samantha Bee's vulgarity to your Facebook feed being filled with people demanding that posters or people in videos be identified so their employers can be pressured into firing them.

We seem to want speech (if not thought) to be aligned with a kind of corporate mindset: Get in formation. Say the right things (silence is complicity). Non-compliance is not tolerated.

It is not surprising, then, that media companies are rushing to check themselves before the searchlight and Internet Outrage Cannon are trained on them.

Spotify: Having it both ways

Spotify floated a trial of policies around "hateful content" and "hateful conduct". It quickly walked them back after strong objections and threats of content takedowns from artists.

Spotify had stated they would remove, or at least not promote, music that...
expressly and principally promotes, advocates, or incites hatred or violence against a group or individual based on characteristics, including, race, religion, gender identity, sex, ethnicity, nationality, sexual orientation, veteran status, or disability.
I have to assume Spotify's intent was to have a way to kick things like Nazi punk (an actual genre) or ISIS-core (hopefully not an actual genre) off the platform. But Spotify is a global company, and is trying to keep all the world's music up on its service. It has over 50 million songs in its database, and exercises no editorial control over submitted content. 

And unfortunately for Spotify, there's a lot of music that could fall under their stated policy. Hip-hop -- one of Spotify's most popular genres -- has a number of acts (especially older ones) whose lyrics are anti-gay, anti-Semitic, and/or endorse violence against women. There are plenty of rock acts, too.

Spotify went a step further. They said they'd even block or at least refuse to promote (non-offensive) content from people who had behaved badly -- so-called "hateful conduct". Their test case here was R. Kelly. Presumably they were trying to give themselves a way to pull down content by whoever is currently screwing up so that Spotify could avoid any kind of boycott.

But this policy also quickly becomes difficult to operate with any precision. Is Spotify going to decide which crimes merit banishment? Any crime? Felonies only, or misdemeanors? What about things that are crimes in one country but not another? What about content that was made well before the artist did anything objectionable? (A good example here would be Bill Cosby, whose early comedy albums are squeaky-clean and considered landmark comedic works.) What about allegations, rather than convictions?

It's worth noting that R. Kelly has never been convicted of any crime. In fact, he was tried and found not guilty, so as far as the government is concerned, he's clean. 

Or what about content where some of the people involved have done something wrong, but others have not (like producer Dr. Luke and Kesha)? Dr. Luke has not been found guilty of anything, with some lawsuits dismissed and others in progress. But apparently that doesn't matter. Even if you personally had a good experience with him and say so, you and anyone associated with you might get into trouble. Do you pull down the records he did with Kesha? Doesn't that hurt her more than him?

There are plenty of artists who have confessed to or endorsed all sorts of bad conduct in interviews. They have boasted about treating women poorly. Frequently abused drugs and alcohol. Driven faster than the speed limit. Driven while intoxicated, and/or killed people in drunk driving accidents. Committed crimes ranging from selling drugs to gun violence to murder. And if you want to include artists who have just been accused of bad behavior, well, that list gets long fast.

Spotify is also trying to have it both ways with their half-step of "well, we won't take the content out of Spotify, but we will refuse to promote it on the homepage or in our editorial playlists". Really, Spotify? So you feel bad about it, but not so bad that people shouldn't be able to get it? That seems like a rather weak approach, calculated for some marketing value only.

Let's be clear: if you think the content is objectionable, either due to what the content promotes or the alleged actions of the creators, why would you make it available at all? By doing so, you are putting money in the creators' pockets, and (at least by the logic of our current era) thereby endorsing this behavior. You are complicit. You are aiding and abetting.

At least Spotify is trying to exercise responsibility for what they offer, albeit in a clumsy and conflicted way.

Steam: Caveat emptor

Steam, the iTunes Music Store of video games, similarly ratcheted up a policy of "no pornography" to include material it had not previously covered, and then, apparently, backed off, before creeping back to cover some of it again. Or not. It is hard to tell, and the inconsistency is part of the problem (the blurry guidelines are the other).

In Steam's case, they ran into a raft of complaints from the LGBTQ gamer community, who claimed the material in question (so-called "visual novels") was important to them as a safe space to explore their issues, and was the genre and medium that catered to them.

Steam's hand-wringing seemed particularly hypocritical in that they targeted sexual content, and yet have zero problems with the casual, extreme, graphic violence that is commonplace among many video games.

Then, Steam threw in the towel. After analyzing the problem, they simply decided that, rather than curate what is sold in their store, they will just "enable" developers to have the freedom to put out anything, as long as it is not, in Valve's opinion (emphasis added), "illegal or straight up trolling".

Let us ignore the unfortunate use of ambiguous slang for the moment (one wonders if "trolling on the down low" would be OK, or if we have a mutually agreed-upon definition of "trolling" that navigates parody, commentary, and so forth). Let us also set aside the question of Valve's ability to judge whether or not something is "illegal".

The real disappointment here is that the world's biggest game store (and potentially, soon, the biggest software store) doesn't care what it is selling, as long as you are buying. They have abdicated any responsibility for what is in the store, leaving themselves a flimsy back door to dump content when there is a PR incident.

They do not have to spend any time or money looking at their own merchandise. They can dodge blame (just like the gun industry) and say "Hey, we didn't make it, we just sell it. It may or may not reflect our 'values'. Don't buy it if you don't want it."

It seems only a matter of time before this blows up in their faces. But more importantly, it suggests that they feel their massive market dominance as a platform carries with it no responsibility whatsoever. Or perhaps, worse, they feel their influence obliges them to do nothing in the name of "freedom" for creators. So they provide a means for people to monetize hateful, sloppy garbage. Caveat emptor.

Fortunately, the gaming press seems to think Valve is making a bad call here.

YouTube: A monster machine

Steam's situation leads one to reflect that perhaps we should not be offering a megaphone and platform for anyone and everyone. YouTube all but confirms it.

YouTube has been wrestling with issues similar to Steam's. YouTube has always taken the view that it is at its best when exercising zero editorial control over what people are posting. So it doesn't. No human being looks at the content being submitted, other than the submitter. There are some rudimentary tests, but not for anything like "values"; they simply make sure that whatever horrible video is being posted doesn't infringe the copyrights of the big media players. There's your "values".

So people go to work, and quickly learn that the best content is the "worst" content: things that are deliberately shocking, outrageous, and not the kind of thing you would find on any 20th century "network". Given the incentives of internet platforms and our own human nature, we have ended up with "creators" like Logan Paul and Lil Tay. Perhaps Penny Arcade said it best:

They made a kind of monster machine, with every possible lever thrown towards a caustic narcissism, and then they pretend to be fucking surprised when an unbroken stream of monsters emerge.

YouTube has allowed people to blast their most awful "thoughts" and actions worldwide, and have them preserved forever. Great job, gang. You made the world a little or a lot more terrible, and you are making money from it. (And that is to say nothing of the endless stream of copyright violations that also fund YouTube's business).

Perhaps unsurprisingly, there is some pushback. In a recent example, just last week, London police asked YouTube to take down some music videos, because the police feel they glorify or encourage knife crime. While there might not be any evidence the videos are contributing to knife crime per se, it is obvious they are amplifying the culture in which it thrives, and at least some of the crimes have been inspired by, if not predicted by, the communications happening in the videos.

I am a big supporter of freedom in the arts. But it seems hard to defend "art" where the singer says (roughly) "Tom, that's my friend Jimmy in the back and he's going to stab you until you are dead", and then Tom is found dead from stabbing and Jimmy is found holding a bloody knife and says "yeah, I stabbed Tom until he was dead."

That seems more like terrorism, and not all that different from Al Qaeda or ISIS ranting about killing non-believers. (It doesn't help that the music is really uninteresting).

As with Steam, simulated violence is totally fine for YouTube, while anything approaching simulated sex is not. I guess you have to pay for HBO or Cinemax (if you want it fancy and artistic) or just go looking on Tumblr or PornTube (if you want it free and real).

Unlike Steam and Spotify, YouTube has never indicated it would try to police any of its content, and is only reluctantly doing so in the face of legal pressure. And for the first time, YouTube is also facing challenges to the "safe harbor" laws that allow it to ignore the content it hosts.

Minding The Store

Prior to the internet's creation of stores with infinite shelf space, people who sold physical goods had to make choices about what to carry. Every item in the store occupied space that something else could be using. Stores chose based on what would sell or attract people to the store. They also made choices about what kind of institution they wanted to be, and what kind of customers they wanted to have.

Jeff Bezos famously remarked that his biggest mistake was branding early Amazon as "The World's Biggest Bookstore" instead of "The World's BEST Bookstore". He felt the promise of having every book was an expensive distraction, especially when very few books account for the majority of sales. It is the same in every other media vertical.

Perhaps the solution is for the internet's virtual vendors to shoulder some more responsibility and actually choose what goes up. Maybe the world doesn't need every single song, game, or video available in the biggest stores. Niche tastes can look in niche places. It is not difficult to find things on the internet. All that extra junk is not really driving revenue for any of these businesses, and it seems like it just adds risk and gives voice and legitimacy to some questionable ideas.


Of course, all of this willful ignorance and amplification of idiocy for profit says something about the platforms like Spotify, Steam, and YouTube. But it says worse things about us. Because we are the people filling our brains, hearts, and souls with this "content".

The impulse to limit what content or media people can consume has been around for a long time, frequently driven by concerns about the negative influence of the content and/or media on youth. If you can think of a medium, it has likely been accused of corrupting youth: The internet. Video games. Television. Role-playing games like "Dungeons and Dragons". Movies (usually the kind with sex, but occasionally the kind with violence). Radio. Rock music/hip-hop/jazz. Comic books. Novels (I am not kidding). Probably Greek drama and cave paintings.

On the one hand, a whole bunch of research has shown media consumption has, at most, minimal effect on people's behaviors (and there's a lot of uncertainty about whether consuming violent media makes people more violent, or if people with high tendencies towards violence prefer more violent media).

But we also know "you are what you eat", and speak frequently about how media "changed our lives". It is sometimes intended as a joke, sometimes as hyperbole. But still.

Does consumption of media, of art, have no effect at all on us? If it does have no effect, well, sorry, artists. You've been wasting your time. But if it does have an effect, even small and/or temporary, the implications seem obvious.

It would mean that everyone in the production chain -- artists, businesspeople, distributors, etc. -- has a responsibility to think about what they are putting out there, wrap it in warnings, and make sure it is only consumed by those of appropriate age.

It would also mean that we, as individuals and media consumers, have a responsibility to think about what we put in our heads.
