In this “golden age” of Natural Language Processing, machines seem to get better and better at understanding human language. But do they? We discuss the recent wave of neural language models and review the core ideas of contemporary NLP; by highlighting some fundamental flaws in popular models, we take the opportunity to revive older ideas about "meaning" and show how true understanding would require radically different architectures from the ones we have now. We conclude by presenting some results from our A.I. team, along with practical applications of our favored theoretical approach.