One thing is for sure: the approach to police work has radically changed in the last few decades, thanks to modern technology. We have filled our streets with surveillance cameras that will soon be enabled with face-recognition capabilities, and DNA testing of evidence gathered at crime scenes is helping find culprits who might have walked free not too long ago.

All of these tools are useful after a crime has been committed, but the holy grail of crime fighting is to prevent the act before it even happens. Sometimes that can be accomplished through clever investigation that foils a plot by delinquents who have clumsily left a trail to follow (the kind of stuff detective novels are made of). But would it be possible to actually predict a crime, as in a real-life version of Minority Report?

Earlier this month, an article written by John R. Quain for Digital Trends discussed the work of an Israeli tech company named Cortica, which recently partnered with the Indian company Best Group to analyze the terabytes of data streaming from CCTV cameras in public areas. Their goal is to improve public safety by preventing violent crimes such as sexual assault, looking for “behavioral anomalies” that would allegedly signal when someone is about to commit violence against another person.

The software is based on the type of military and government security screening systems that try to identify terrorists by monitoring people in real time, looking for so-called micro-expressions — minuscule twitches or mannerisms that can betray a person’s nefarious intentions. Such telltale signs are so small they can elude an experienced detective, but not the unblinking eye of AI.

In his magnum opus 1984, George Orwell described the ‘thought police’ in charge of detecting enemies of the party not as endowed with true telepathy, but as highly trained in human psychology and behavior: they could detect the most minute betrayals of an individual’s inner thoughts, despite his best efforts to hide them behind a mask of compliance when surrounded by others, or in front of the ‘telescreens’ located all over the dystopian state of Oceania. Cortica’s software seems to enable such ‘mind-reading’ through the tremendous horsepower of artificial-intelligence algorithms combined with camera surveillance.

But what about true precognition? The ability to predict a future criminal event before anyone –perhaps not even the perpetrator himself!– knows that it will happen? Believe it or not, that could one day become a reality, too.

Dr. Dean Radin, in a recent interview for the podcast Radio Misterioso, where he was invited to promote his most recent book Real Magic, talked about one of the many projects he’s currently working on: analyzing the ‘sentiment’ found in the content of Twitter messages to see if it is possible to predict where the next ‘lone-wolf’ mass shooter might strike. Sentiment refers to the emotional affect conveyed by the words in a text, and sentiment analysis involves data-crunching the myriad of tweets published at any given time in order to measure how a population feels based on what they are tweeting about. Dr. Radin explains there are university groups that have been analyzing Twitter’s content ever since the platform began. One of these analyses, for example, indicates that the day of the Las Vegas mass shooting in 2017 was the saddest day in the entire history of Twitter.
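The studies Radin cites don’t publish their code, but the basic idea of lexicon-based sentiment scoring can be sketched in a few lines. The word lists and sample tweets below are purely illustrative assumptions; real analyses use far larger lexicons or trained models:

```python
# Minimal lexicon-based sentiment sketch (toy word lists, NOT the
# actual methodology of any Twitter study mentioned in the article).

POSITIVE = {"happy", "love", "great", "hope", "safe"}
NEGATIVE = {"sad", "fear", "horror", "tragedy", "shooting"}

def sentiment_score(text: str) -> int:
    """Count positive words minus negative words in one tweet."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Averaging scores over all tweets in a time window gives a crude
# "population mood" number for that window.
tweets = [
    "what a great day, so much hope",
    "horror and tragedy tonight, so sad",
]
daily_mood = sum(sentiment_score(t) for t in tweets) / len(tweets)
```

Aggregated day by day, a measure like `daily_mood` is what lets researchers point to one date as “the saddest day” in the platform’s history.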

But Dr. Radin is not so much interested in sentiment analysis as in ‘pre-sentiment’, and in trying to use Twitter to predict future outcomes. After all, many researchers claim Trump’s victory in 2016 could have been predicted from Twitter data prior to the election, unlike all of the traditional polls, which told a different story. So, getting back to the mass shooting in Las Vegas, Radin is looking at the tweets sent one week before the event, as well as the tweets sent after it, to see if people were somehow emotionally ‘reacting’ to the shooting before it actually happened; this in accordance with the theory that all humans are endowed with some level of psychic ability and are capable of precognitively reacting to future events, at least on an unconscious level. Twitter pre-sentiment, then, would be (IMO) not unlike the spikes observed in the random number generators studied by the Global Consciousness Project.
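Detecting such a pre-sentiment ‘spike’ amounts to asking whether the week before an event deviates anomalously from a baseline, much like the deviation statistics the Global Consciousness Project computes for its random number generators. Here is a minimal sketch under assumed parameters (a 7-day window and a 2-sigma threshold are my choices, not Radin’s published methodology):

```python
from statistics import mean, stdev

def presentiment_spike(daily_scores, window=7, threshold=2.0):
    """Compare the last `window` days of mood scores against the
    earlier baseline; return the z-score and whether it crosses
    the threshold. Window and threshold are illustrative choices."""
    baseline = daily_scores[:-window]
    recent = daily_scores[-window:]
    mu, sigma = mean(baseline), stdev(baseline)
    z = (mean(recent) - mu) / sigma
    return z, abs(z) > threshold

# Synthetic data: a month of roughly flat mood, then a week-long
# dip before a hypothetical event.
scores = [0.0, 0.1, -0.1, 0.05, -0.05] * 6 + [-0.8] * 7
z, flagged = presentiment_spike(scores)
```

Whether real pre-event tweets show any such anomaly is, of course, exactly the empirical question Radin’s project is trying to answer.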

The ultimate goal of such a project, of course, is to see (a) whether there is a pre-sentiment of an active shooting BEFORE it happens, and more importantly (b) whether it’s possible to predict WHERE it might occur. Now, as fascinating as this possibility seems, in terms of not only proving the reality of precognition but also establishing it as a valuable tool for the prevention of crimes, there’s no doubt that Dr. Radin’s project raises more than a few moral and ethical concerns. We already know we live in a world in which our present activity is constantly monitored online; do we want to live in a world in which our future activity will also be predicted, and potentially used against us?

Don’t get me wrong: I think the world of Dr. Radin, and I’m also in full support of saving lives and stopping deranged individuals from unleashing violence against innocent civilians (just ask all my Twitter followers, who have to put up with my gun-banning tweets! And NO, I don’t want to read your pro-gun arguments in this article’s comment section). But what if less democratic nations, for example, decided to monitor the pre-sentiment of, say, dissident behavior against their governments?

Will precog crime fighting, whether through A.I. behavioral monitoring or pre-sentiment analysis, turn our world into a heaven on Earth in which no human life is ever lost to violence, or into a dystopia even worse than 1984? One thing I can safely predict without the use of my third eye is that whatever our future turns out to be, it will be far more complicated than anything Philip K. Dick or George Orwell ever envisioned.