October | 2017 | The Knowledge Dynasty


May to warn tech firms on terror content

Image copyright Reuters

Technology companies must go “further and faster” in removing extremist content, Theresa May is to tell the United Nations general assembly.

The prime minister will also host a meeting with other world leaders and Facebook, Microsoft and Twitter.

She will challenge social networks and search engines to develop ways to take down terrorist material within two hours.

Tech giant Google said firms were doing their part but could not do it alone – governments and users needed to help.

The prime minister has repeatedly called for an end to the “safe spaces” she says terrorists enjoy online.

Ministers have called for limits to end-to-end encryption, which stops messages being read by third parties if they are intercepted, and measures to curb the spread of material on social media.
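The principle behind end-to-end encryption can be shown with a toy sketch. The XOR cipher below is purely illustrative and offers no real security, but it captures the key idea ministers object to: only the two endpoint devices hold the key, so an intercepted message is unreadable in transit.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption": applying it twice with the same key
    # restores the original bytes.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# In end-to-end encryption, the shared key exists only on the two
# endpoint devices, never on the carrier's servers.
# (Fixed here for reproducibility; real systems negotiate keys securely.)
key = bytes(range(1, 17))

plaintext = b"meet at noon"
ciphertext = xor_cipher(plaintext, key)

# An interceptor sees only the ciphertext, which is unreadable
# without the key.
assert ciphertext != plaintext

# The recipient, who holds the key, recovers the message.
assert xor_cipher(ciphertext, key) == plaintext
```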

At the general assembly on Wednesday, the prime minister will hail progress made by tech companies since the establishment in June of an industry forum to counter terrorism.

But she will urge them to go “further and faster” in developing artificial intelligence solutions to automatically reduce the period in which terror propaganda remains available, and eventually prevent it appearing at all.

 

Media caption Google’s general counsel Kent Walker defended its anti-terrorism efforts on BBC Radio 4’s Today

Together, the UK, France and Italy will call for a target of one to two hours to take down terrorist content wherever it appears.

Internet companies will be given a month to show they are taking the problem seriously, with ministers at a G7 meeting on 20 October due to decide whether enough progress has been made.

Kent Walker, general counsel for Google, who is representing tech firms at Mrs May’s meeting, said they would not be able to “do it alone”.

“Machine-learning has improved but we are not all the way there yet,” he told BBC Radio 4’s Today programme, in an exclusive interview.

“We need people and we need feedback from trusted government sources and from our users to identify and remove some of the most problematic content out there.”

Asked about carrying bomb-making instructions on sites, he said: “Whenever we can locate this material, we are removing it.

“The challenge is once it’s removed, many people re-post it or there are copies of it across the web.

“And so the challenge of identifying it and identifying the difference between bomb-making instructions and things that might look similar that might be perfectly legal – might be documentary or scientific in nature – is a real challenge.”
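The distinction problem Walker describes can be sketched with a toy example (hypothetical, not any platform's actual system): a naive keyword filter cannot tell harmful instructions from legitimate scientific text, because both use the same vocabulary.

```python
# Toy keyword filter: flags any text containing a watchlisted term.
FLAGGED_TERMS = {"explosive", "detonator", "fuse"}

def naive_flag(text: str) -> bool:
    words = set(text.lower().split())
    return bool(words & FLAGGED_TERMS)

harmful = "how to wire a detonator to an explosive charge"
legitimate = "this chemistry lecture explains why an explosive reaction releases energy"

# Both texts are flagged, though only the first should be removed --
# which is why context-aware models and human review are needed.
assert naive_flag(harmful)
assert naive_flag(legitimate)
```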

A Downing Street source said: “These companies have some of the best brains in the world.

“They should really be focusing on what matters, which is stopping the spread of terrorism and violence.”

Technology companies defended their handling of extremist content after criticism from ministers following the London Bridge terror attack in June.

Google said it had already spent hundreds of millions of pounds on tackling the problem.

Facebook and Twitter said they were working hard to rid their networks of terrorist activity and support.

YouTube told the BBC that it received 200,000 reports of inappropriate content a day, but managed to review 98% of them within 24 hours.
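Even at that review rate, a quick back-of-envelope calculation using the figures YouTube quoted shows a sizeable daily residue:

```python
reports_per_day = 200_000
reviewed_within_24h = 0.98  # fraction reviewed inside a day

# Reports per day NOT reviewed within 24 hours.
backlog = reports_per_day * (1 - reviewed_within_24h)
print(int(backlog))  # 4000
```

In other words, roughly 4,000 reports a day would still sit unreviewed past the 24-hour mark.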

Addressing the UN General Assembly, Mrs May will say terrorists will never win, but that “defiance alone is not enough”.

“Ultimately it is not just the terrorists themselves who we need to defeat. It is the extremist ideologies that fuel them. It is the ideologies that preach hatred, sow division and undermine our common humanity,” she will say.

‘Mystified’

A new report out on Tuesday found that online jihadist propaganda attracts more clicks in the UK than in any other country in Europe.

The study by the centre-right think tank, Policy Exchange, suggested the UK public would support new laws criminalising reading content that glorifies terror.

Image copyright AFP
Image caption IS militants are moving to less well-known sites after being chased off mainstream social media

Google said it would give £1m to fund counter-terrorism projects in the UK, part of a $5m (£3.7m) global commitment.

The search giant has faced criticism about how it is addressing such content, particularly on YouTube.

The funding will be handed out in partnership with UK-based counter-extremist organisation the Institute for Strategic Dialogue (ISD).

An independent advisory board will be accepting the first round of applications in November, with grants of between £2,000 and £200,000 awarded to successful proposals.

ISD chief executive Sasha Havlicek said: “We are eager to work with a wide range of innovators on developing their ideas in the coming months.”

A spokesman for the Global Internet Forum to Counter Terrorism, which is formed of tech companies, said combating the spread of extremist material online required responses from government, civil society and the private sector.

“Together, we are committed to doing everything in our power to ensure that our platforms are not used to distribute terrorist content,” said the spokesman.

‘International consensus’

Brian Lord, a former deputy director for Intelligence and Cyber Operations at UK intelligence monitoring service GCHQ, said the UN was “probably the best place” to raise the matter as there was a need for “an international consensus” over the balance between free speech and countering extremism.

He told BBC Radio 4’s Today programme: “You can use a sledgehammer to crack a nut and so, actually, one can say: well just take a whole swathe of information off the internet, because somewhere in there will be the bad stuff we don’t want people to see.

“But then that counters the availability of information,” he said, adding that what is seen as “free speech” in one country might be seen as something that should be taken down in another.

Mrs May’s appearance at the UN comes days before she is due to give a major speech on Brexit – a subject that led to repeated questions from journalists on her visit.

Foreign Secretary Boris Johnson was accused of undermining her plans by writing a 4,000-word newspaper article setting out his own vision for Brexit.

Speaking to the Guardian, Mr Johnson said he was “mystified” by the row his article had prompted, saying he had “contributed a small article to the pages of the Telegraph” because critics had been saying he was not speaking up about Brexit.

Read more: http://www.bbc.co.uk/news/uk-politics-41327816

Instagram thinks sharing your friend’s rape threat will get you back on the app

An Instagram post of friends walking through the crystal-clear waves of Ibiza might get someone to check in with the photo-sharing app, but a screenshot of a rape threat? Not so much.

On Thursday, Guardian reporter Olivia Solon revealed on Twitter that Instagram had done just that with one of her posts: it promoted a rape and death threat she’d received to an undisclosed number of her Facebook friends, including her sister.

“Olivia Solon and 155 other friends are using Instagram,” read the ad shown to Solon’s sister on Facebook. “See Olivia Solon’s photo and posts from friends on Instagram.”

 

According to the Guardian, Instagram did not explain why Solon’s threatening post was chosen for promotion to her Facebook friends. The post has only five likes, two of them recent, but its 20 sympathetic and consoling comments may have flagged it as high-engagement.

The ad wasn’t part of a paid promotion; instead, it was meant to “motivate” people who aren’t on the app, or haven’t used it in some time, to look at content from their friends. Instagram didn’t reveal exactly who the post was shared with, but said it would have been “some” of Solon’s Facebook friends.

Instagram’s rape-threat flub appears to be another instance of advertising algorithms failing consumers. Last week, ProPublica revealed that Facebook allowed ad buyers to target consumers who were interested in topics such as “Jew hater,” “How to burn Jews,” and “History of ‘why Jews ruin the world.’”

A day later, BuzzFeed reported that Google allowed targeted ads for racist keywords such as “Jewish parasite,” and “Black people ruin everything.” The Daily Beast, too, found that Twitter allowed targeted ads for users who responded to terms such as “Nazi” and “wetback.”

“We are sorry this happened—it’s not the experience we want someone to have,” an Instagram spokesperson said in a statement regarding Solon’s post. “This notification post was surfaced as part of an effort to encourage engagement on Instagram. Posts are generally received by a small percentage of a person’s Facebook friends.”

H/T the Guardian

Read more: https://www.dailydot.com/irl/olivia-solon-instagram-rape-threat/

How secure is Apple’s Face ID, really?

In its latest product event, Apple moved confidently to convince consumers that face recognition is the most convenient way to secure your phone and the sensitive information on it. Face ID, the company’s face recognition technology, which replaces the fingerprint scanner in the new iPhone X, requires only that you show your face to your phone to unlock it, confirm Apple Pay payments, and make purchases in iTunes and the App Store.

According to Phil Schiller, senior vice president of marketing at Apple, “With the iPhone X, your iPhone is locked until you look at it and it recognizes you. Nothing has ever been more simple, natural, and effortless. This is the future of how we’ll unlock our smartphones and protect our sensitive information.”

To be sure, showing your face to your phone is easier than typing a passcode or pressing your finger against a scanner. It saves you a few seconds, you obviously can’t forget it, and it won’t be affected by moisture and oil.

But is it more secure?

Here are the key things you should consider about facial recognition before you enroll in the latest trend sweeping the iPhone and other major smartphones.

Can Face ID be spoofed?

Face recognition authentication has existed for several years, but it has become notorious for its security flaws. Researchers and cybercriminals have easily circumvented face locks on various devices using hi-res pictures and videos of the owners. And unlike a password, your face is not a secret. It’s available to anyone who Googles your name or gets close enough to snap a picture of you. Even the face lock on Samsung’s Galaxy S8 was fooled by a photo.

However, Face ID incorporates technology that makes the lock exponentially harder to bypass. During setup, Face ID projects 30,000 infrared dots onto the owner’s face to create a 3D depth map. During authentication, it uses that map to verify that a real face is in front of the camera and that its physical features match the owner’s.
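As a rough illustration of why a depth map defeats flat images, consider this hypothetical sketch (not Apple's actual algorithm, and the numbers are invented): a live face shows depth variation close to the enrolled map, while a printed photo is nearly flat.

```python
import math

def depth_distance(enrolled, live):
    # Root-mean-square difference between two equal-length depth maps.
    assert len(enrolled) == len(live)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(enrolled, live)) / len(enrolled))

THRESHOLD = 0.02  # arbitrary tolerance, in metres

enrolled_face = [0.30, 0.25, 0.32, 0.27]  # depths measured at setup
live_face     = [0.31, 0.25, 0.31, 0.27]  # same face, slight sensor noise
flat_photo    = [0.29, 0.29, 0.29, 0.29]  # a printed photo has no depth relief

print(depth_distance(enrolled_face, live_face) < THRESHOLD)   # True: unlocks
print(depth_distance(enrolled_face, flat_photo) < THRESHOLD)  # False: rejected
```

The flat photo may look identical to the camera, but its depth profile gives it away, which is what makes spoofing with pictures and videos so much harder.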


Getting around depth maps will be much more difficult than using flat images. Apple says not even professionally made masks will work. Some experts believe it’s not impossible to fool, however, and that it’s only a matter of time and “enough external data” before the technology can be sidestepped. And per Apple, if you have an identical twin, Face ID may mistake them for you.

Further, depth sensors like the ones used in the iPhone X have their own technical challenges. They might fail under certain conditions, such as intense light or when you’re wearing a hat or scarf. Apple says Face ID works under various conditions, but we’ll have to verify that when the device actually ships.

Can Face ID be forcibly activated?

This question applies to all biometric authentication mechanisms, including fingerprint scanners. If you’re captured by criminals or taken into the custody of law enforcement, can they unlock your phone by holding it up to your face?

Unfortunately, they can. The technology doesn’t work if you aren’t looking at the phone or if your eyes are closed, but it is not yet smart enough to tell the difference between a real unlock attempt and a forced one (maybe someday it will be). In the case of police, at least, legal experts say officers would be required to obtain a warrant before forcing you to unlock the device.

Apparently, Apple recognizes this as a possible flaw in its technology. In iOS 11, users have to enter the iPhone’s passcode when connecting it to a new computer. This will make it harder to siphon data from a phone unlocked forcibly. Apple has also made it possible to disable Face ID and Touch ID, its fingerprint-scanning technology, by pressing the Home or Power button (depending on the device model) five times in rapid succession.

Where does Apple store your face data?

Your mug is not the most private part of your body. Governments have huge databases of citizens’ pictures, the internet may be flooded with pictures of you and your friends if you’ve been on social media in recent years, and facial recognition is already a serious privacy concern.

Nonetheless, you should be concerned about where your data is stored and how secure it is, especially the depth map of your face, which is still somewhat private. Most facial recognition software relies on machine learning algorithms, programs that work with huge data sets that are stored on cloud servers. Companies running these types of software need to collect more and more data samples to improve their performance. They might also mine the data for other commercial purposes or share it with third parties.

For the moment, Apple has made it clear that no face data will leave your phone, the same approach it takes with Touch ID. Everything will be computed on the device, thanks to its powerful A11 processor, and sensitive data will be stored in the Secure Enclave, the most secure component of the iPhone.

Screenshot via Apple

Apple’s Phil Schiller shows off Face ID on the iPhone X.

How much data does Face ID collect?

This is perhaps the creepiest side of Face ID. The technology has no manual trigger on iPhone X. You only need to hold it in front of your face to activate it, which means it’s always watching, waiting for your face to show up. How much data it stores is an open question.

But we’ve seen similar functionality cause privacy controversies with the Echo, Amazon’s smart speaker. And unlike the Echo, your iPhone doesn’t stay in your home. You take it with you wherever you go.

Moreover, there’s the question of what Apple will do with the technology once it has access to millions of people’s faces. The company didn’t have much incentive to collect fingerprint data. But face and gaze information is a totally different matter and can be used for things such as tracking attention and reaction to ads. We’ll have to see if Apple will resist the urge to make use of the technology in other potentially profitable endeavors.

For most users, Face ID will provide a secure and reliable way to protect an iPhone, with decent workarounds for most of its flaws. Apple says there is a one-in-1,000,000 chance of Face ID being unlocked by someone other than you, compared with one in 50,000 for Touch ID.
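Those quoted odds can be put in perspective with a back-of-envelope calculation. Assuming each attempt by a random stranger is independent (a simplification), the chance that at least one of n strangers unlocks the phone is:

```python
face_id_far = 1 / 1_000_000  # Apple's quoted false-accept rate for Face ID
touch_id_far = 1 / 50_000    # quoted rate for Touch ID

def p_any_unlock(far, n):
    # Probability that at least one of n independent attempts succeeds.
    return 1 - (1 - far) ** n

print(round(p_any_unlock(touch_id_far, 1000), 4))  # 0.0198
print(round(p_any_unlock(face_id_far, 1000), 4))   # 0.001
```

With 1,000 random attempts, the Touch ID figure implies roughly a 2% chance of a false unlock versus about 0.1% for Face ID, the twenty-fold gap you would expect from the quoted rates.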

However, if you prefer privacy over convenience (as I do), remembering and typing a passcode is a small price to pay for higher security.

Ben Dickson is a software engineer and the founder of TechTalks. Follow his tweets at @bendee983 and his updates on Facebook.

Read more: https://www.dailydot.com/layer8/iphone-x-face-id/
