They don't want "true threat distinction"; they want to lock up anybody who makes the TSA look bad (and similar).
By that logic, they should just lock up everyone in the TSA...

The only people responsible for making it look bad are TSA employees themselves, and the reason they're thought of as nothing but a bunch of thieves and bullies is that they are nothing but thieves and bullies.
It's more for the NSA, I think, not the TSA. The former has the real reason to use such an AI. Given the recent controversies, it must have become clear that they have to be darn careful about who they brand as "terrorist", especially if they want to expand. Indeed, the problem with any "acceptable" monitoring is that over 99% of what you see is not what you're looking for. Contrary to popular belief, the NSA isn't looking for any sort of "thoughtcrime" or the slightest signs of dissent, but for actual threats. Since jokes about terrorism vastly outnumber actual, serious mentions of it, it's more a question of detecting whether someone is actually serious.
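To put rough numbers on that "over 99%" point, here's a quick sketch of the base-rate arithmetic. Every figure below is hypothetical, picked only to show the shape of the problem: even a very accurate filter drowns in false positives when real threats are rare.

```python
def flag_statistics(population, base_rate, sensitivity, specificity):
    """Return (true positives, false positives) for a screening filter."""
    threats = population * base_rate
    innocents = population - threats
    true_positives = threats * sensitivity            # real threats flagged
    false_positives = innocents * (1 - specificity)   # innocents flagged
    return true_positives, false_positives

# Say 1 in 100,000 monitored messages is a genuine threat, and the filter
# is 99% sensitive and 99% specific -- optimistic assumptions either way.
tp, fp = flag_statistics(population=10_000_000, base_rate=1e-5,
                         sensitivity=0.99, specificity=0.99)
print(f"true positives:  {tp:,.0f}")                   # ~99
print(f"false positives: {fp:,.0f}")                   # ~99,999
print(f"P(serious | flagged) = {tp / (tp + fp):.4f}")  # ~0.0010
```

So under these made-up numbers, roughly a thousand flags come in for every real threat, which is why the filtering has to be so good.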
Still, detecting that would be especially problematic, considering even humans have big problems spotting sarcasm, especially on the internet (but often in spoken conversation, too). It might be better to have a program that can "sort of tell" if someone is joking and pass anything it's unsure about to humans. I can see a "sarcasm and false positive detection" algorithm taking a lot of weight off the NSA analysts' shoulders even if it's rather rudimentary. Remember, the data sets involved are immense, and effective monitoring of the internet requires very good filtering systems, both human and automatic. Especially since they're risking both arresting innocents and letting a terrorist attack happen if they screw up.
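A minimal sketch of what that "pass anything unsure to humans" triage might look like, assuming some upstream model emits a probability that a mention is serious. The names and thresholds here are made up for illustration, not any real system:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    text: str
    p_serious: float  # model's estimated probability the mention is serious
    route: str        # "dismiss", "escalate", or "human_review"

def triage(text: str, p_serious: float,
           dismiss_below: float = 0.05,
           escalate_above: float = 0.95) -> Verdict:
    """Auto-handle confident cases; send the ambiguous middle to analysts."""
    if p_serious < dismiss_below:
        route = "dismiss"       # confidently a joke / sarcasm
    elif p_serious > escalate_above:
        route = "escalate"      # confidently serious
    else:
        route = "human_review"  # the model can only "sort of tell"
    return Verdict(text, p_serious, route)

# e.g. triage("gonna blow up on twitter lol", p_serious=0.03).route == "dismiss"
```

Even with a wide "unsure" band in the middle, everything the model can confidently dismiss is one less item an analyst has to read.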