
AI Can Predict If Someone Will Commit Suicide Years in Advance, Study Claims

Using information such as data from Facebook Messenger, researchers from Florida State University claim artificial intelligence can predict whether someone will attempt suicide up to two years in advance with 80 to 90 percent accuracy.

According to Hindustan Times:

“Artificial intelligence is also increasingly seen as a means for detecting depression and other mental illnesses, by spotting patterns that may not be obvious, even to professionals. A research paper by Florida State University’s Jessica Ribeiro found it can predict with 80 to 90 percent accuracy whether someone will attempt suicide as far off as two years into the future.

Facebook uses AI as part of a test project to prevent suicides by analysing social network posts. And San Francisco’s Woebot Labs this month debuted on Facebook Messenger what it dubs the first chatbot offering “cognitive behavioural therapy” online – partly as a way to reach people wary of the social stigma of seeking mental health care.”
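What does an accuracy claim like that look like in practice? The sketch below shows the general shape of such a supervised-learning pipeline: fit a classifier on historical records labeled with outcomes, then score it on held-out records. This is a minimal illustration on synthetic data with invented features, not the Florida State study’s actual method or code.

```python
# Minimal sketch of how an accuracy figure like "80 to 90 percent" is
# typically produced: fit a classifier on labeled historical records,
# then score it on held-out records. Synthetic data for illustration
# only; this is not the FSU study's code or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Invented stand-in features (in a real study these might be counts of
# risk indicators drawn from clinical or social-media records).
n = 5000
X = rng.normal(size=(n, 10))
# Synthetic outcome loosely correlated with the first two features.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 1.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Note that “accuracy” here simply means the fraction of held-out cases the model labels correctly, a detail that matters later.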

Academics are hooking artificial intelligence up to Facebook data in order to offer a service that monitors people’s mental health. Doesn’t sound Orwellian at all…

The mainstream article cited several examples of AI successes:

“– California researchers detected cardiac arrhythmia with 97 percent accuracy on wearers of an Apple Watch with the AI-based Cardiogram application, opening up early treatment options to avert strokes.

– Scientists from Harvard and the University of Vermont developed a machine learning tool – a type of AI that enables computers to learn without being explicitly programmed – to better identify depression by studying Instagram posts, suggesting “new avenues for early screening and detection of mental illness.”

– Researchers from Britain’s University of Nottingham created an algorithm that predicted heart attacks better than doctors using conventional guidelines.”

On the contrary, many AI predictions have been wildly inaccurate. Further, the institutions behind these allegedly functional or beneficial AI applications are “the system” itself: institutions that work against the good of the common people and serve entities such as the US military-industrial complex. They can’t be trusted.

Harvard, the University of Nottingham, and other prestigious universities are the academic bowels of “the system,” the source of some of the most dangerous technology used in warfare and against civilian populations. Those prestigious academics aren’t good guys. Anyone familiar with their history would be deeply suspicious of their efforts to create anti-suicide AI.

In a Wired article titled “Predicting the future of artificial intelligence has always been a fool’s game,” one expert, Stuart Armstrong, argued that philosophers have made more accurate AI predictions than scientists:

Later experts have suggested 2013, 2020 and 2029 as dates when a machine would pass the Turing test, which gives us a clue as to why Armstrong feels that such timeline predictions — all 95 of them in the library — are particularly worthless. “There is nothing to connect a timeline prediction with previous knowledge as AIs have never appeared in the world before — no one has ever built one — and our only model is the human brain, which took hundreds of millions of years to evolve.”

His research also suggests that predictions by philosophers are more accurate than those of sociologists or even computer scientists. “We know very little about the final form an AI would take, so if they [the experts] are grounded in a specific approach they are likely to go wrong, while those on a meta level are very likely to be right”.

Although, he adds, that is more a reflection of how bad the rest of the predictions are than the quality of the philosophers’ contributions.

Beyond that, he believes that AI predictions as a whole have all the “characteristics of the kind of tasks that experts are going to be bad at predicting”.

In particular it is the lack of feedback about the accuracy of predictions about AI that leads to what has been called the “overconfidence of experts”, Armstrong argues. Such “experts” include scientists, futurologists and journalists. “When experts get immediate feedback as to whether some prediction is right or wrong then they are going to get better at predicting. Without it, everyone is overconfident as they are making quite definite predictions on pretty much no evidence at all.”

Perhaps common sense can settle this: if a person were going to commit suicide, how would the circumstances be evident two full years in advance? And to anyone reading this: are you so predictable that an artificial intelligence could foresee your suicide attempt?

That scenario would require very specific circumstances: a person who obviously has nothing to live for, with a long, consistent history of self-destructive behavior, depression, etc. Even then, it would take an anomalously predictable person for the prediction to be accurate. And the headline accuracy figure is less impressive than it sounds, as the sketch below shows.
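A quick back-of-the-envelope check makes the skepticism concrete. Suicide attempts are statistically rare, and on rare outcomes a “model” that never predicts the event at all already posts a very high accuracy score. The 2 percent base rate below is an assumption chosen for illustration, not a figure from the study.

```python
# Why "80 to 90 percent accuracy" can be hollow for a rare outcome:
# a model that predicts "no attempt" for everyone beats that number
# whenever fewer than roughly 10-20% of people have the outcome.
# The 2% base rate here is a made-up illustration, not study data.
n_people = 10_000
n_attempts = 200                  # assumed 2% base rate

correct = n_people - n_attempts   # the do-nothing model is wrong only on true cases
accuracy = correct / n_people
print(f"'always predict no' accuracy: {accuracy:.0%}")  # -> 98%
```

An accuracy number only means something next to the base rate and to how many true cases the model actually catches.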

Or… they would just need surveillance to predict it, and that’s what system-sponsored academics are really being paid to push for.

(Image credit: HighQFX, Pinterest)


Deneb Verdad is a researcher and writer from North Highlands, California. His topics of interest include mapping out the world's nefarious powerful people and entities, DARPA, technocracy, biological warfare, and others.

