Griefbots and Psychological Profiling
CBC Spark recently aired a puff piece about two technologists who have created chatbots to mimic loved ones who recently died. These chatbots mine the digital data of the deceased to produce bots that respond and interact somewhat like the dead people did.
The first interviewee, Muhammad Aurangzeb Ahmad, seemed to have some awareness of the ethical problems here. The second interviewee, Eugenia Kuyda, offered platitudes about how the simulation of her friend Roman could clearly not be confused with the dead person himself.
This is not exactly the same domain as psychological profiling, but there are some similarities. The goal of psychological profiling is to map what is happening internally, so that we can be understood, manipulated (oops, "influenced"), and exploited for commercial gain. The goal of these griefbots is to map what is happening on our outsides, mimicking what we do in the world to a degree that convinces other humans that our bots resemble us in some way.
Griefbots work because we are predictable. With a relatively small amount of training data, we can train a computer to converse in a way reminiscent of a particular human being. The simulation is not perfect, but it is astonishing that this works at all.
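To make that concrete, here is a minimal sketch of how such a bot might be built: fine-tune a small pre-trained language model on a corpus of the person's messages. This is my guess at the general technique, not a description of the interviewees' actual systems; the file name, model choice, and hyperparameters are all placeholders.

```python
# A minimal griefbot sketch: fine-tune a small language model on a
# person's chat logs, then generate replies in the learned "voice".
# "chat_logs.txt" and all hyperparameters here are hypothetical.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          TextDataset, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Treat the deceased person's message history as a plain-text corpus.
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="chat_logs.txt",  # hypothetical corpus
                      block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="griefbot", num_train_epochs=3),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()

# Generate a reply reminiscent of the person.
prompt = tokenizer("How was your day?", return_tensors="pt")
reply = model.generate(**prompt, max_new_tokens=40, do_sample=True)
print(tokenizer.decode(reply[0], skip_special_tokens=True))
```

The point of the sketch is how little is required: a single text file of someone's messages and an off-the-shelf model are enough to get something eerily reminiscent of them.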
My feeling is that we are similarly predictable when it comes to our psychological makeup, and that it will take a relatively small amount of data (say, all the data we leak when surfing the web or using our phones) to slot us into similar categories. In fact, advertisers on Facebook and Google already have access to such categories. The only thing that is missing is an association between psychological categories and the particular triggers that are effective on them. Are you willing to bet that highly-funded, highly-motivated entities will not be able to discover these triggers? I'm not, but that is why I am a dinosaur.
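If that association is as learnable as I suspect, the machinery needed is not exotic. Below is a deliberately toy sketch of the pipeline: behavioral data in, psychological category out, trigger looked up. Every feature, label, and trigger here is invented for illustration; real profilers presumably work with far richer signals.

```python
# A speculative sketch: slot people into psychological categories from
# leaked behavioral data, then look up the trigger that works on each
# category. All features, labels, and triggers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user features: [late-night browsing share,
# ad-click rate, social-app minutes/day, news-refresh frequency]
X = np.array([[0.7, 0.12, 180, 30],
              [0.1, 0.02,  20,  2],
              [0.5, 0.09, 120, 15],
              [0.2, 0.03,  40,  5]])
y = ["impulsive", "deliberate", "impulsive", "deliberate"]  # invented labels

clf = LogisticRegression(max_iter=1000).fit(X, y)

# The missing piece the post describes: a mapping from category to the
# triggers that are effective on it (again, invented for illustration).
triggers = {"impulsive": "limited-time offer",
            "deliberate": "detailed comparison chart"}

new_user = np.array([[0.6, 0.10, 150, 20]])
category = clf.predict(new_user)[0]
print(category, "->", triggers[category])
```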