Quizzes & Puzzles · 38 mins ago
Internet Firms Must Do More On Terror?
Is it reasonable to expect organisations like Facebook (and Answerbank) to 'do more' in reporting potential terrorist plots to the authorities?
http://www.bbc.com/news/uk-30200311
Answers
With hindsight of what occurred in this particular instance, it seems to be a no-brainer.
But going forwards, it seems unworkable. I doubt whether there are sufficient resources available to thoroughly investigate every 'plot' reported. And, naturally, any 'plot' reported but insufficiently investigated, or ignored, will result in outrage and public inquiries should the 'terrorists' follow it through.
They don't need to "read" everything... all they need is the kind of monitoring software system that already "looks" at comms and flags up whatever it's been programmed to flag up: keywords, phrases, etc., the same way the banks can flag up suspicious transactions or trading patterns on your account.
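The idea above can be sketched in a few lines. This is a purely illustrative toy, not anything a real monitoring system does; the watch-list terms and the whole-word matching rule are my own assumptions for the example:

```python
# Toy keyword-flagging sketch: scan messages for watch-list terms and flag
# any message containing one as a whole word, the way a bank might flag a
# transaction matching a suspicious pattern. Terms are purely hypothetical.
import re

WATCHLIST = {"attack", "bomb"}  # hypothetical watch-list terms

def flag_message(text: str) -> set:
    """Return the watch-list terms found as whole words in `text`."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return WATCHLIST & words

messages = [
    "see you at the match tonight",
    "the film's plot has a bomb scare in it",  # innocent, but still flagged
]
flagged = [(m, hits) for m in messages if (hits := flag_message(m))]
```

Note that even this tiny example flags an innocent film discussion, which is exactly the false-positive problem raised further down the thread.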
Still a gazillion to look at, but gazillions fewer than a human would otherwise have to filter through.
There's a risk in relying on key-word detection that people's totally innocent conversations can be flagged up just for discussing an issue without supporting it. Automatic technology, if not implemented properly, can suck. Rather like bad censorship of swearwords. I've never quite got over the (separate but related) shock I felt when, for whatever reason, I was discussing "a mishit" (a legitimate word), and the forum using swearword censorship technology rendered it as "mi*doodoo*". Yes, really.
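That "mi*doodoo*" mangling is the classic naive-substring failure (often called the Scunthorpe problem): the filter blanks out a banned word even when it sits inside a longer, innocent word. A small illustration, using a harmless stand-in banned word of my own choosing:

```python
# Contrast naive substring censoring with whole-word censoring.
# "hit" is a stand-in banned word, purely for illustration.
import re

BANNED = "hit"

def censor_naive(text: str) -> str:
    """Blank out every occurrence, even inside longer words."""
    return text.replace(BANNED, "*" * len(BANNED))

def censor_word_boundary(text: str) -> str:
    """Blank out only whole-word occurrences."""
    return re.sub(rf"\b{re.escape(BANNED)}\b", "*" * len(BANNED), text)

censor_naive("a mishit shot")          # "a mis*** shot": innocent word mangled
censor_word_boundary("a mishit shot")  # "a mishit shot": left alone
```

The word-boundary version avoids mangling "mishit", though real moderation filters obviously need far more than a regex.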
In a similar way, an automated programme searching for, e.g., "Taliban" or "beheading" might lead to some guilty parties being detected earlier, but would presumably also mean people discussing Game of Thrones getting flagged up (or a video made by a Belgian TV company whose subtitles randomly popped up with "alqaeda" a couple of times, despite being entirely about Maths, or so I'm told!). That's not really ideal.
And anyway there is the change in philosophy: all conversations of any kind become monitored in some way, with no cause in most cases. Even if it stays as an automated process to start with, it's hard to see how this wouldn't lead to innocent people getting caught in the net along with the intended targets.
-- answer removed --
// I can't believe terrorists use keywords like bombs, taliban, beheading, kill, jihad, ISIS. //
If these keywords were being looked for, there'd be a load of red lights flashing against your name now hc4361, and mine too for repeating the text.
Personally I think it's nonsense. Even if you had software that could detect this stuff, you'd still need millions of staff sorting through all the auto-detected cr^p to determine what was remotely serious or relevant to anything, and that's before you even get into the question of whether internet companies should be acting as government snoops or not.
I have thought about this quite a bit, more so now this Lee Rigby story has emerged. Even on our cosy little site the number of posts reported and dealt with is probably a small percentage of those posts that "should" be dealt with but that us editors just never see.
I am mainly talking about our beloved and persistent spammers but the analogy holds. :-)
We can't be everywhere, so imagine FB, who have something like 24 million users; that's at least that many posts per day! You just cannot trawl through that lot.
That said, personally I do feel that social media sites could be held a bit more accountable, particularly for detecting terror threats.
auntie xxx