There are estimates that AI could be responsible for as much as 50% of the content on the internet.
You can just make the neural net read hundreds of pages of posts and tweets in favour of a subject, then press play. And people won't know the difference.
I think I found one on reddit around the time of Charlottesville. I had a fairly detailed argument with it, and a rebuttal to their rebuttal. Then it all went weird and noticeably vague, as if it had suddenly shifted into another persona or context. And that's a tell-tale sign of AI - it can make the right noises in one context, pick out the right words, but at some point it strays outside the bounds of its area of expertise, the situations where it works very well. Then you become an 'edge case' and it gives you very mediocre responses, using words and phrases that come from a very different motivation or state of mind. The character falls apart in our eyes. But until that point, I was convinced I was talking to a human.
And that's because - at least on the internet - 99.9% of what we write is us saying the same thing as someone else, very slightly differently. And the people who reply to and argue against our posts are doing the same, saying the same thing as someone else who has made that argument before.
We're all on autopilot. The trick is - very genuinely - to get to a personal level. Bring the individual humanity into it, the sort of thing that emerges in close physical situations, the ways of thinking and responding that are routinely left by the wayside when we are on t'internet.
It's actually very troubling to see how much we are all on autopilot. People point at the other side and say that's what they are. But it's rubbish: we're all at it, all the time. It stems from being on a side in the first place. And it stems from our relationship to language.