NLP and NLP Manipulate You & how AI might be used to guide the masses — Seekers

João Cruz
4 min read · Apr 16, 2020



There’s a lot of talk about the dangers of AI, and whenever the subject is brought to the table there’s the inevitable reference to 2001: A Space Odyssey’s infamous HAL. Although this is a discussion worth having (and we might write an article on that soon), it is usually placed at a point in time years away from now… But reality might already be more “Kubrick-like” than HAL himself, who debuted just before the 70’s. So let’s raise the question: is AI taking over already? And how? Our bet is on yes, and we want you to keep an eye out for it.


Let’s begin with an exercise:

  • Seat up on your chair and relax;
  • Picture several trees and among them, dwarves heading to work;
  • Focus on the last dwarf on the right. You might sense it’s a lucky dwarf by his looks;
  • Perhaps he is wearing glasses, green transparent, and sipping on a soda bottle…
  • Close your eyes and focus on that for as long as you’d like to get a clear picture in your mind.


Grab a sheet of paper and write down the first number that comes to mind. Fold it twice and set it aside, for now.


You’ve probably seen the above structure at play a few times, in magic tricks, when the magician guesses the number someone is thinking of. The technique is quite reliable and it works with something that NLP calls hypnotic suggestion.

Now, if you are savvy in AI you are probably thinking: “what on earth does Natural Language Processing have to do with hypnosis?” We’re talking about Neuro-Linguistic Programming (let’s call it NeuroLP), but don’t worry, we’ll tie them together nicely! NeuroLP was developed as a way to model the neural and linguistic patterns of experts in order to help non-experts replicate excellent results. One of the experts modelled in the early NeuroLP days was the hypnosis genius, Dr. Milton Erickson, who mastered the power of suggestion as a way to help his psychotherapy patients.

With the rise of NLP applications in AI, we’ve seen tools such as natural speech generation improve dramatically, and today we even carry them on our phones. Every time you type, your phone suggests what you might want to type next; it’s called auto-completion. How does it work? Basically, a machine learning algorithm is trained on thousands of written documents and establishes a relationship between different words and how likely they are to be used together. From then on, the algorithm continues learning as people use it: the more a word suggestion is accepted, the more it will be suggested. You’ve probably spotted the reinforcing loop here, and that’s exactly the core of the question.
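The mechanism described above can be sketched in a few lines of Python. This is a toy illustration, not any real keyboard’s algorithm: it counts which word follows which in a training text, ranks suggestions by frequency, and bumps a suggestion’s count each time a user accepts it, which is exactly the reinforcing loop in question.

```python
from collections import defaultdict

class AutoComplete:
    """Toy bigram next-word predictor, reinforced by accepted suggestions."""

    def __init__(self):
        # counts[prev][next] = how often `next` has followed `prev`
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def suggest(self, word, k=3):
        # Rank candidate next words by how often they were seen/accepted
        followers = self.counts[word.lower()]
        return sorted(followers, key=followers.get, reverse=True)[:k]

    def accept(self, word, chosen):
        # The reinforcing loop: every accepted suggestion becomes
        # more likely to be suggested again.
        self.counts[word.lower()][chosen] += 1

ac = AutoComplete()
ac.train("the cat sat on the mat the cat ran on the grass")
print(ac.suggest("the"))  # → ['cat', 'mat', 'grass']

# Users keep accepting "grass"; it climbs to the top of the list.
for _ in range(3):
    ac.accept("the", "grass")
print(ac.suggest("the"))  # → ['grass', 'cat', 'mat']
```

Real systems use far larger models than bigram counts, but the feedback structure is the same: usage feeds the statistics that shape future suggestions.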


If you did the exercise at the beginning of this article correctly, you probably have a piece of paper next to you with the number seven written down. There’s an 80% chance that you picked the number seven, because there was a strong suggestion for you to do so, hidden in the article itself:

  • Did you notice the alarm clock on the cover, marking seven o’clock?
There’s also a good number of words in the text that sound similar to seven (setup, seat, several, sense).
And dwarves heading to work? How many dwarves do you know who cross the forest to go to work?
  • Lucky dwarf? Most people will tell you seven is their lucky number.
  • Green transparent (…) soda bottle… Rings a bell?

Let’s go back to your phone and the auto-completion AI: see how those words pop up on your screen, discreetly, as you type? That’s a suggestion, and not just a conscious one. As the algorithm behind it becomes more and more trained, it will drive people’s speech towards a convergence point, with the most used words being pushed into people’s texts. There are ways to push the algorithm to diversify its use of words, but is that enough to prevent a “content bias”? And as with any other AI, the real danger is in the hands of those using it. What if a foreign government decided to push online speech towards a certain sentiment, throwing hints of hypnotic suggestion through auto-complete to sway votes among an undecided electorate?

Key Takeaways

  • Humans are not aware of how much their own thoughts are prone to being influenced by outside suggestion that is pushed under the radar towards the subconscious;
AI’s influence and control might already be happening, just not the way we thought it would;
  • The negative impact of AI is not bred by AI’s intents or goals, but motivated by those who build, manage or manipulate AI for their own purposes.

Originally published at on April 16, 2020.