EU judges to tackle 'right to be forgotten' again
Paris, May 19, 2017 (Reuters). The "right to be forgotten" - or stopping certain web search results from appearing under searches for people's names - will be debated at the European Union's top court after Alphabet Inc's Google refused requests from four individuals.

In May 2014, the Court of Justice of the European Union (ECJ) ruled that people could ask search engines, such as Google and Microsoft's Bing, to remove inadequate or irrelevant information from web results appearing under searches for their names - a principle dubbed the "right to be forgotten". Google has since received over 720,000 removal requests and accepted about 43 percent of them, according to its transparency report.

Four individuals who had asked Google to remove links to webpages about them appealed to the CNIL, the French data protection authority, after the search engine company refused their requests. The CNIL agreed with Google's decision, prompting the individuals to take their case to the Conseil d'Etat, France's supreme administrative court, which referred it to the Luxembourg-based ECJ.

The ECJ "now has to decide whether 'sensitive personal data' — such as the political allegiance of an individual, or a past criminal conviction reported in the press — should always outweigh the public interest", Google's senior privacy counsel Peter Fleischer wrote in a blogpost. "Requiring automatic delisting from search engines, without any public interest balancing test, risks creating a dangerous loophole. Such a loophole would enable anyone to demand removal of links that should remain up in the public interest, simply by claiming they contain some element of sensitive personal data."
A Conseil d'Etat statement said the requests concerned: a video that "explicitly revealed the nature of the relationship that an applicant was deemed to have entertained with a person holding a public office"; a press article on the suicide of a member of the Church of Scientology that mentioned one of the applicants was the church's public relations manager; several articles on criminal proceedings against an applicant; and articles about another applicant's conviction for sexually abusing minors.

The French court said a number of "serious issues" had arisen with regard to the interpretation of European law in the case before it. "Such issues are in relation with the obligations applying to the operator of a search engine with regard to web pages that contain sensitive data, when collecting and processing such information is illegal or very narrowly framed by legislation, on the grounds of its content relating to sexual orientations, political, religious or philosophical opinions, criminal offences, convictions or safety measures," the court said.

The CNIL declined to comment at this stage of the proceedings. A date for the hearing has not been set.

"We will be advocating strongly for the public interest balancing test to apply to all types of delisting requests—including those containing sensitive personal data," Fleischer said.
Twain: a news app that has no human editor
Croatia, April 21, 2017 (Niemanlab). The news and information we consume is becoming increasingly tailored to our interests and habits. Our Facebook feeds are algorithmically controlled, and publishers such as The New York Times — which is reportedly planning a personalized homepage — are also attempting to mold their content to readers' preferences.

A new app, however, is trying to depersonalize the news. The app, Twain, attempts to give users an overview of the stories trending across the internet that they might not otherwise see.

Twain has no human editors. Instead, it relies on "100 custom-designed algorithms and processes" to scan sites around the internet and judge what's most popular at that moment, said Twain founder Miran Pavic, who is based in Croatia and is also content director for the Croatian news site Telegram. Many of the factors the app takes into consideration — the numbers of shares, retweets, and likes — come straight from social media, but the aim is to "distribute it in a way that's not going to be in a bubble," Pavic said.

The app is divided into three sections: a main timeline; individual news topics, such as North Korea's recent failed missile launch or the new Star Wars trailer; and categories, such as politics, style, and sports. "The idea is to get the editor out of the way," Pavic said.

The app scrapes sources ranging from Reuters and CNN to Medium and Reddit. The algorithm, Pavic said, weighs factors about how articles are being shared across the internet, then surfaces those it deems to be above average. "The idea is really to find stuff that's extraordinary," Pavic said, though he didn't want to go into too much detail about how the process works.
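Pavic declined to detail how Twain's ranking actually works, but the "above average" surfacing he describes can be sketched in a few lines. The weights, the `Story` type, and the scoring function below are illustrative assumptions for this article, not Twain's real code:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    shares: int    # social signals the article mentions
    retweets: int
    likes: int

def engagement_score(story: Story) -> float:
    # Hypothetical weighting of the social signals; Twain's actual
    # "100 custom-designed algorithms and processes" are not public.
    return 1.0 * story.shares + 0.8 * story.retweets + 0.5 * story.likes

def surface_trending(stories: list[Story]) -> list[Story]:
    # Keep only the stories scoring above the mean, mirroring the
    # "surfaces those it deems to be above average" heuristic.
    scores = [engagement_score(s) for s in stories]
    avg = sum(scores) / len(scores)
    return [s for s, sc in zip(stories, scores) if sc > avg]
```

In practice a system like this would also need per-source normalization (a Reddit thread and a Reuters wire story accumulate shares at very different rates), which may be part of what the unpublished processes handle.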
Pavic said the app’s administrators are aware that the algorithmically controlled nature of the app could result in the spread of misinformation. While Twain will remove sites that publish demonstrably false information, he said, the app is “not going to ban publishers that put a certain spin on things.” He argued that fake news is able to thrive on platforms like Facebook because News Feed is designed to promote stories it knows you will agree with. “Personalized algorithms make it less likely that Salon readers will bump into Breitbart content; Twain, on the other hand, makes that more likely to happen, and more desirable,” Pavic said in an email. “Discovery works best if you get exposed to stuff you never knew existed.”

The challenge for Twain — and most apps, for that matter — is to get users to download it, at a time when, according to a Pew report last year, 44 percent of Americans already get their news from Facebook and most use only a handful of apps. There’s also no dearth of news discovery apps.