
Latest advancements in NLP

Some of the recent advances in the field of Natural Language Processing are given below.

1 - Attention is All You Need

Google AI - June 2017

"Attention is all you need" this was a research paper published by Google employees. Ashish Vaswani et. al. published this paper which revolutionized the NLP industry. It was the first time the concept of transformers was referenced. Before this paper, RNN and CNN were used in the field of NLP but they had two problems

  • Dealing with long term dependencies
  • No parallelization during training

RNNs could not handle long-term dependencies well even with improvements such as bidirectional RNNs, LSTMs and GRUs, and their sequential nature prevented parallelization during training. Transformers with self-attention solved both problems and became the state of the art for seq2seq models used in language translation.
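To make the mechanism concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of the Transformer. The shapes, variable names and random inputs are illustrative assumptions, not code from the paper.

```python
# A minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_k) projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # similarity of every position to every other position
    weights = softmax(scores, axis=-1)       # attention weights; each row sums to 1
    return weights @ V                       # each output is a weighted mix of all positions

rng = np.random.default_rng(0)
d_model, d_k, seq_len = 8, 4, 5
X = rng.normal(size=(seq_len, d_model))                        # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 4) -- one context-aware vector per input position
```

Because every position attends to every other position in a single matrix operation, there is no recurrence to unroll, which is what makes training parallelizable.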

2 - ULMFiT (Universal Language Model Fine-Tuning)

fast.ai - May 2018

The other most important development was bringing transfer learning to NLP. ULMFiT introduced the idea of pre-training a single universal language model and then fine-tuning it for downstream tasks, in three stages: general-domain language-model pre-training, target-task language-model fine-tuning, and target-task classifier fine-tuning. The AWD-LSTM forms the building block of this model; AWD stands for ASGD (Averaged Stochastic Gradient Descent) Weight-Dropped.
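The snippet below is a hedged sketch of that fine-tuning pattern in plain PyTorch: freeze a pretrained encoder, train only a new task head, then unfreeze for full fine-tuning. The LSTM stand-in, layer sizes and learning rates are assumptions for illustration; ULMFiT itself uses a pretrained AWD-LSTM via the fastai library.

```python
# ULMFiT-style transfer-learning recipe, sketched in plain PyTorch.
import torch
import torch.nn as nn

class Classifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden: int, n_classes: int):
        super().__init__()
        self.encoder = encoder                     # "pretrained" language-model encoder
        self.head = nn.Linear(hidden, n_classes)   # new, randomly initialised task head

    def forward(self, x):
        out, _ = self.encoder(x)                   # (batch, seq, hidden)
        return self.head(out[:, -1])               # classify from the last hidden state

vocab_emb, hidden = 32, 64
encoder = nn.LSTM(vocab_emb, hidden, batch_first=True)  # pretend this was pretrained on a large corpus
model = Classifier(encoder, hidden, n_classes=2)

x = torch.randn(2, 7, vocab_emb)   # stand-in for a batch of embedded token sequences
print(model(x).shape)              # torch.Size([2, 2])

# Stage 1: freeze the encoder and train only the head.
for p in model.encoder.parameters():
    p.requires_grad = False
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-3)

# Stage 2 (later): unfreeze and fine-tune everything with a lower learning rate.
for p in model.encoder.parameters():
    p.requires_grad = True
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
```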

3 - BERT (Bidirectional Encoder Representations from Transformers)

Google AI - November 2018

BERT combines both of the above-mentioned advancements, i.e. transformers and transfer learning. It trains a transformer encoder fully bidirectionally and achieved state-of-the-art (SOTA) results on 11 NLP tasks. It is pre-trained on the BooksCorpus and the whole of English Wikipedia, the latter consisting of almost 2.5 billion words.
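As a quick illustration of what the pre-trained model can do, here is a minimal masked-word-prediction sketch using the Hugging Face transformers library; the library, checkpoint name and example sentence are assumptions, not part of the original article.

```python
# Ask a pre-trained BERT to fill in a masked word (requires the `transformers` package).
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The capital of France is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))  # top predictions with their probabilities
```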

4 - Google's Transformer-XL

Google AI - January 2019

This model outperformed even BERT on language modeling. By adding segment-level recurrence and relative positional encodings, it can attend to context beyond a single fixed-length segment, resolving the context-fragmentation issue faced by the original Transformer.
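The sketch below illustrates the segment-level recurrence idea in NumPy: hidden states from the previous segment are cached and reused as extra context for the current one. The names, sizes and random data are illustrative assumptions, not the paper's code.

```python
# Conceptual sketch of segment-level recurrence: each segment attends over
# [cached memory from the previous segment; current segment].
import numpy as np

def attend(q, k, v):
    scores = q @ k.T / np.sqrt(k.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
d, seg_len = 8, 4
memory = np.zeros((0, d))                        # no cached context before the first segment

for step in range(3):                            # process a long sequence segment by segment
    segment = rng.normal(size=(seg_len, d))      # hidden states for the current segment
    context = np.concatenate([memory, segment])  # keys/values include the cached memory
    out = attend(segment, context, context)      # queries come only from the current segment
    memory = segment                             # cache for the next step (no gradients flow into it in the real model)
    print(step, out.shape)                       # each segment can now "see" the previous one
```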

5 - StanfordNLP

Stanford University - January 2019

The official site defines it as -
StanfordNLP is a Python natural language analysis package. It contains tools, which can be used in a pipeline, to convert a string containing human language text into lists of sentences and words, to generate base forms of those words, their parts of speech and morphological features, and to give a syntactic structure dependency parse.
It contains pre-trained neural models for 53 human languages, extending the reach of these NLP tools far beyond English.
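Below is a minimal usage sketch following the package's documented quick start; the example sentence is an assumption, and the model download is a one-off step.

```python
# Run the StanfordNLP pipeline on a sentence and print its dependency parse.
import stanfordnlp

stanfordnlp.download('en')             # fetch the pre-trained English models (one-off)
nlp = stanfordnlp.Pipeline()           # tokenize, lemmatize, POS-tag and dependency-parse
doc = nlp("Barack Obama was born in Hawaii.")
doc.sentences[0].print_dependencies()  # dependency structure of the first sentence
```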

6 - OpenAI's GPT-2

OpenAI - February 2019

GPT-2 stands for “Generative Pretrained Transformer 2”. As the name suggests, it is used mainly for the natural language generation side of NLP, and it is a state-of-the-art model for text generation. GPT-2 can generate a whole article from a few input sentences. It is also based on transformers. GPT-2 achieves state-of-the-art scores on a variety of domain-specific language modeling tasks without being trained on data specific to any of these tasks; it is only evaluated on them as a final test, which is known as the “zero-shot” setting.
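A minimal text-generation sketch with the Hugging Face transformers library is shown below; the prompt and generation settings are illustrative assumptions, not part of the original article.

```python
# Generate a short continuation of a prompt with the public GPT-2 checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Natural language processing has advanced rapidly because",
                   max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])  # prompt plus the model's continuation
```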

Source: https://dev.to/amananandrai/recent-advances-in-th
