JessicaRulestheUniverse.com

Personal blog of Jessica Zafra, author of The Collected Stories and the Twisted series

Archive for the ‘Technology’ Category

I have an issue with Uber but they refuse to listen.

December 07, 2016 By: jessicazafra Category: In Traffic, Technology 6 Comments →


I use Uber a lot. It’s convenient, safe, and reduces the stress of getting around the city. I’ve written about how glad I am that Uber exists. I don’t even mind paying the surge rate (up to 5X during the Xmas season last year) as long as I am informed of it beforehand. Hey, traffic is a pain in the ass, what are you going to do.

Recently Uber updated its app. Now it doesn’t show you the surge rate. Instead you get an “Upfront Fare” that tells you how much your ride will cost. It may seem like a good idea, except that it is not what you end up paying.

This morning I took Uber to the Rockwell area. The “Upfront Fare” was Php55. The actual fare when I got to my destination was Php100.

This afternoon I took Uber back to my house, roughly the same distance as my morning trip. The “Upfront Fare” was Php170. The actual fare I was charged when I got home: Php248.

I understand that the effects of road traffic cannot be predicted exactly, but knowing the surge rate would give me a more accurate idea of how much I would have to pay. The “Upfront Fare” is completely unreliable, being nearly 50 percent off the final fare. I would prefer not to get a shock when the driver gives me the total (I use the cash option, so I am more aware of what I pay than if I charged it to a card).

As I am trying to be cheerful, I thought I would take it up with Uber instead of getting angry. In the past, I could report issues to Uber by replying to the receipt they email after each ride. Turns out you can’t do that anymore. I got an automatic reply that said the address does not accept incoming email. I was advised to go to “Help” in the Uber app or go to help.uber.com on the web.

So I did that. But the Help menu on both app and website has limited options, none of which cover “Your Upfront Fare is not very upfront, and I would prefer to see the surge rate so I don’t get a shock.” I tried reporting my issue under “I lost an item” and “I had an issue with a receipt or payment option” to no avail.


I asked my terrifyingly efficient sister how I could contact Uber, and she suggested their Facebook page. As I am not on the social media, I asked her to relay my message for me. Here is their reply:

Hello, just go to your History in Uber app, choose the trip that got issue and submit a note, we’ll follow up.

In short, Uber does not care to listen. And if Uber will not listen, I do not have to use Uber.

Uber, I want a reply. Readers, could you do me a favor and pass this on to them?

In the meantime I will use Grab. I have found Grab to be slightly more expensive, but at least they tell you what the final fare is as soon as you book.

* * *
Update on 13 December: The surge rate is back! When you request a ride, the surge rate appears.

See? Was that so difficult? Uber-ing again.

* * *

Update on 15 December: Since then, the surge rate window has not appeared again.

On the Internet, nearly everything conspires against the truth.

November 07, 2016 By: jessicazafra Category: Books, Current Events, Technology 1 Comment →

Doctor Strange sends some journalists to a hell dimension.

We are living in the post-fact age. The more I see what’s going on, the more thankful I am that I was born in the analog world (a.k.a. old).

Digital technology has blessed us with better ways to capture and disseminate news. There are cameras and audio recorders everywhere, and as soon as something happens, you can find primary proof of it online.

You would think that greater primary documentation would lead to a better cultural agreement about the “truth.” In fact, the opposite has happened.

Documentary proof seems to have lost its power. If the Kennedy conspiracies were rooted in an absence of documentary evidence, the 9/11 theories benefited from a surfeit of it. So many pictures from 9/11 flooded the internet, often without much context about what was being shown, that conspiracy theorists could pick and choose among them to show off exactly the narrative they preferred. There is also the looming specter of Photoshop: Now, because any digital image can be doctored, people can freely dismiss any bit of inconvenient documentary evidence as having been somehow altered.

One of the apparent advantages of online news is persistent fact-checking. Now when someone says something false, journalists can show they’re lying. And if the fact-checking sites do their jobs well, they’re likely to show up in online searches and social networks, providing a ready reference for people who want to correct the record.

But that hasn’t quite happened. Today dozens of news outlets routinely fact-check the candidates and much else online, but the endeavor has proved largely ineffective against a tide of fakery.

That’s because the lies have also become institutionalized. There are now entire sites whose only mission is to publish outrageous, completely fake news online (like real news, fake news has become a business). Partisan Facebook pages have gotten into the act; a recent BuzzFeed analysis of top political pages on Facebook showed that right-wing sites published false or misleading information 38 percent of the time, and lefty sites did so 20 percent of the time.

“In many ways the debunking (of misinformation) just reinforced the sense of alienation or outrage that people feel about the topic, and ultimately you’ve done more harm than good.”

Dormammu makes Doctor Strange’s head explode.

Read How the Internet is Loosening Our Grip on the Truth, and make sure you have chocolate, a stiff drink, or a snuggly cat to console you afterwards. Or look at this selection of the weirdest Doctor Strange moments in the comics.

Long weekend links: Social media creates angry partisans, how to tell if you’re a jerk, and what earwax is for

October 30, 2016 By: jessicazafra Category: Health, Language, Psychology, Technology No Comments →

Are You A Jerk? (with attempts at definitions of jerk and asshole)

Illustration from Nautilus by Jackie Ferrentino

The scientifically recognized personality categories closest to “jerk” are the “dark triad” of narcissism, Machiavellianism, and psychopathic personality. Narcissists regard themselves as more important than the people around them, which jerks also implicitly or explicitly do. And yet narcissism is not quite jerkitude, since it also involves a desire to be the center of attention, a desire that jerks don’t always have. Machiavellian personalities tend to treat people as tools they can exploit for their own ends, which jerks also do. And yet this too is not quite jerkitude, since Machiavellianism involves self-conscious cynicism, while jerks can often be ignorant of their self-serving tendencies. People with psychopathic personalities are selfish and callous, as is the jerk, but they also incline toward impulsive risk-taking, while jerks can be calculating and risk-averse.

Another related concept is the concept of the asshole, as explored recently by the philosopher Aaron James of the University of California, Irvine. On James’s theory, assholes are people who allow themselves to enjoy special advantages over others out of an entrenched sense of entitlement. Although this is closely related to jerkitude, again it’s not quite the same thing. One can be a jerk through arrogant and insulting behavior even if one helps oneself to no special advantages.


AI in the war against troll farms and outsourced online hatred

September 21, 2016 By: jessicazafra Category: Current Events, Technology 1 Comment →

Manila, Philippines. August 28, 2014. An employee working as a content moderator for Task Us sits in front of her computer at her cubicle on the 11th floor of the SM Aura Office Building Tower in the Taguig district of Manila. Task Us is an American outsourcing tech company with offices in the Philippines. (Photo by Moises Saman/MAGNUM)
Companies like Facebook and Twitter rely on an army of workers employed to soak up the worst of humanity in order to protect the rest of us. It’s a soul-killing job better left to AI. Photo: A content moderator from TaskUs in BGC.

Mass harassment online has proved so effective that it’s emerging as a weapon of repressive governments. In late 2014, Finnish journalist Jessikka Aro reported on Russia’s troll farms, where day laborers regurgitate messages that promote the government’s interests and inundate oppo­nents with vitriol on every possible outlet, including Twitter and Facebook. In turn, she’s been barraged daily by bullies on social media, in the comments of news stories, and via email. They call her a liar, a “NATO skank,” even a drug dealer, after digging up a fine she received 12 years ago for possessing amphetamines. “They want to normalize hate speech, to create chaos and mistrust,” Aro says. “It’s just a way of making people disillusioned.”

All this abuse, in other words, has evolved into a form of censorship, driving people offline, silencing their voices. For years, victims have been calling on—clamoring for—the companies that created these platforms to help slay the monster they brought to life. But their solutions generally have amounted to a Sisyphean game of whack-a-troll.

Now a small subsidiary of Google named Jigsaw is about to release an entirely new type of response: a set of tools called Conversation AI. The software is designed to use machine learning to automatically spot the language of abuse and harassment—with, Jigsaw engineers say, an accuracy far better than any keyword filter and far faster than any team of human moderators. “I want to use the best technology we have at our disposal to begin to take on trolling and other nefarious tactics that give hostile voices disproportionate weight,” says Jigsaw founder and president Jared Cohen. “To do everything we can to level the playing field.”

Jigsaw is applying artificial intelligence to solve the very human problem of making people be nicer on the Internet.
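The excerpt contrasts machine learning with plain keyword filters. As a toy illustration only (this is not Jigsaw’s Conversation AI; the word lists, weights, and threshold below are all invented), here is a sketch of why a learned scorer can catch harassment that a blacklist misses:

```python
# Toy sketch: keyword blacklist vs. a learned word-weight scorer.
# All words, weights, and the threshold are made-up examples.

ABUSIVE_KEYWORDS = {"liar", "skank"}

def keyword_filter(text):
    """Flag a comment only if it contains a blacklisted word."""
    words = set(text.lower().split())
    return bool(words & ABUSIVE_KEYWORDS)

# A trained model instead assigns every word a weight learned from
# labeled examples, so insults outside the blacklist still add up.
WORD_WEIGHTS = {"liar": 2.0, "skank": 2.0, "pathetic": 1.5,
                "thanks": -1.0, "great": -1.0}

def learned_score(text, threshold=1.0):
    """Sum the learned weights of the words; flag if over threshold."""
    score = sum(WORD_WEIGHTS.get(w, 0.0) for w in text.lower().split())
    return score >= threshold

comment = "you are pathetic and everyone knows it"
print(keyword_filter(comment))   # the blacklist misses this one
print(learned_score(comment))    # the weighted score catches it
```

The real system learns those weights from millions of moderated comments rather than a hand-written table, but the shape of the advantage is the same: coverage beyond any fixed word list.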

Read it.

Friending, trending, messaging: You’ve been verbed

September 05, 2016 By: jessicazafra Category: Language, Technology 8 Comments →


Mothers and fathers used to bring up children: now they parent. Critics used to review plays: now they critique them. Athletes podium, executives flipchart, and almost everybody Googles. Watch out—you’ve been verbed.

The English language is in a constant state of flux. New words are formed and old ones fall into disuse. But no trend has been more obtrusive in recent years than the changing of nouns into verbs. “Trend” itself (now used as a verb meaning “change or develop in a general direction”, as in “unemployment has been trending upwards”) is further evidence of—sorry, evidences—this phenomenon…

New technology is fertile ground, partly because it is constantly seeking names for things which did not previously exist: we “text” from our mobiles, “bookmark” websites, “inbox” our e-mail contacts and “friend” our acquaintances on Facebook —only, in some cases, to “defriend” them later. “Blog” had scarcely arrived as a noun before it was adopted as a verb, first intransitive and then transitive (an American friend boasts that he “blogged hand-wringers” about a subject that upset him). Conversely, verbs such as “twitter” and “tweet” have been transformed into nouns—though this process is far less common.

Sport is another ready source. “Rollerblade”, “skateboard”, “snowboard” and “zorb” have all graduated from names of equipment to actual activities. Football referees used to book players, or send them off: now they “card” them. Racing drivers “pit”, golfers “par” and coastal divers “tombstone”.

Read it in 1843.

How technology disrupted the truth (On social media, ‘truth’ equals ‘likes’)

July 18, 2016 By: jessicazafra Category: Technology No Comments →

Illustration: Sébastien Thibault in The Guardian

Algorithms such as the one that powers Facebook’s news feed are designed to give us more of what they think we want – which means that the version of the world we encounter every day in our own personal stream has been invisibly curated to reinforce our pre-existing beliefs. When Eli Pariser, the co-founder of Upworthy, coined the term “filter bubble” in 2011, he was talking about how the personalised web – and in particular Google’s personalised search function, which means that no two people’s Google searches are the same – means that we are less likely to be exposed to information that challenges us or broadens our worldview, and less likely to encounter facts that disprove false information that others have shared.

Pariser’s plea, at the time, was that those running social media platforms should ensure that “their algorithms prioritise countervailing views and news that’s important, not just the stuff that’s most popular or most self-validating”. But in less than five years, thanks to the incredible power of a few social platforms, the filter bubble that Pariser described has become much more extreme.
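The mechanism Pariser describes can be sketched in a few lines. This is a hypothetical toy, not Facebook’s actual news feed algorithm; the stories, topics, and similarity measure are invented for illustration:

```python
# Toy sketch of engagement-based feed ranking: stories that overlap
# with what a user already engages with rise to the top, so the feed
# keeps reinforcing existing beliefs.

def similarity(story_topics, user_interests):
    """Fraction of a story's topics the user has engaged with before."""
    overlap = story_topics & user_interests
    return len(overlap) / len(story_topics)

def rank_feed(stories, user_interests):
    # Highest predicted engagement first: the most self-validating
    # story wins, regardless of accuracy.
    return sorted(stories,
                  key=lambda s: similarity(s["topics"], user_interests),
                  reverse=True)

user_interests = {"conspiracy", "politics"}
stories = [
    {"title": "Fact-check debunks viral claim",
     "topics": {"fact-check", "politics"}},
    {"title": "Shocking theory confirmed?!",
     "topics": {"conspiracy", "politics"}},
]
feed = rank_feed(stories, user_interests)
print(feed[0]["title"])  # the self-validating story ranks first
```

Pariser’s plea amounts to changing the sort key: rank by importance or by countervailing value, not by predicted agreement.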

Read Katharine Viner’s essay in the Guardian.