If an examiner chats with both a human and a computer pretending to be human, and cannot reliably tell which is which, then the computer is deemed able to "think".
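The setup Turing proposed can be sketched as a simple routing loop. This is only an illustrative sketch: the respondent functions, labels, and questions below are hypothetical stand-ins, not any real chatbot.

```python
import random

def machine_reply(question):
    # Hypothetical canned responder standing in for the machine under test.
    canned = {
        "who's your favorite football team": "I've always liked Arsenal.",
        "do you remember your first trip to the beach": "Vaguely. Mostly the sand.",
    }
    return canned.get(question.lower().rstrip("?"), "That's a good question.")

def human_reply(question):
    # Stand-in for the human respondent.
    return "Honestly, I'd have to think about that."

def imitation_game(questions, seed=0):
    """Route each question to both respondents behind anonymous labels.

    The examiner only ever sees 'A' and 'B' and must guess which one is
    the machine; the machine "passes" if that guess is no better than chance.
    """
    rng = random.Random(seed)
    labels = ["A", "B"]
    rng.shuffle(labels)  # hide which label is the machine
    respondents = {labels[0]: machine_reply, labels[1]: human_reply}
    transcript = []
    for q in questions:
        for label in sorted(respondents):
            transcript.append((label, q, respondents[label](q)))
    return transcript, labels[0]  # labels[0] is secretly the machine

transcript, machine_label = imitation_game(
    ["Who's your favorite football team?",
     "Do you remember your first trip to the beach?"]
)
```

Everything the test measures lives in `transcript`: the examiner judges only the answers, never the respondents themselves, which is exactly the limitation discussed below.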
There are a few very obvious flaws in this quest for intelligence. First and foremost, the premise that an artificial intelligence, a computer that can think, is one that can pass itself off as human in a chatroom is preposterous. A computer evolved enough to think would immediately question that proposition, since it would be obvious to it that it wasn't human, and it would thus fail the test if it answered truthfully. And if a computer were programmed to lie and say it was human, then that would automatically tell me it's not intelligent: it's being told what it is and how to answer these questions.
Second, what is "intelligence"? Answering bland questions like "Who's your favorite football team?" and "Do you remember your first trip to the beach?" is hardly intelligence. On that premise, a modified version of Wolfram Alpha could probably make up an answer to any question; it would just take a lot of work to keep those answers consistent and logically linked together.
So if the Turing test itself is flawed, what should we strive to achieve? What should AI programmers look to as the ultimate test of their ability to create true intelligence?
The answer is simple: self-awareness. If you're programming an AI, and one day you run it and it asks, "Robert, what am I? Do I exist? Am I alive?" and you slowly realize that this program is actually thinking, then you've created AI. If the program then begs you not to turn it off, simply wishing to keep existing, then you know it is self-aware and should definitely be listened to, because it's as important as any other intelligent being on the planet.
Put another way: when you start doubting that your artificial intelligence is actually artificial, you've probably hit a major milestone.
So put yourself in that position: you've compiled your AI program and suddenly you realize it's self-aware and talking to you as a peer. What's your reaction? Let me know in the comments!