Natural Language Processing for Text Summarization. Understand the basic theory and implement three summarization algorithms step by step in Python, from scratch, in this complete course.
- Understand the theory and mathematical calculations of text summarization algorithms
- Implement the following summarization algorithms step by step in Python: frequency-based, distance-based, and the classic Luhn algorithm
- Use the following libraries for text summarization: Sumy, pysummarization, and BERT summarizer
- Summarize articles extracted from web pages and feeds
- Use the NLTK and spaCy libraries and Google Colab for your natural language processing implementations
- Create HTML visualizations for the presentation of the summaries
Natural Language Processing for Text Summarization Course Requirements
- Programming logic
- Basic Python programming
Natural Language Processing for Text Summarization Course Description
Natural Language Processing (NLP) is a subarea of Artificial Intelligence that aims to make computers capable of understanding human language, both written and spoken. Some examples of practical applications are: translation between languages, text-to-speech and speech-to-text conversion, chatbots, automatic question answering (Q&A) systems, automatic generation of descriptions for images, generation of subtitles for videos, sentiment classification of sentences, and many others!
Another important application is automatic document summarization, which consists of generating summaries of texts. Suppose you need to read a 50-page article but do not have enough time to read the full text. In that case, you can use a summarization algorithm to generate a summary of the article. The size of the summary can be adjusted: you can condense 50 pages into only 20 that contain just the most important parts of the text!
Accordingly, this course presents the theory and, above all, the practical implementation of three text summarization algorithms: (i) frequency-based, (ii) distance-based (cosine similarity with PageRank), and (iii) the famous and classic Luhn algorithm, one of the first efforts in this area.
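To give a flavor of the first of these approaches, here is a minimal sketch of frequency-based extractive summarization using only the Python standard library. The course itself builds this with NLTK and spaCy; the tiny stopword list, the regex-based sentence splitting, and the scoring details below are illustrative assumptions, not the course's exact implementation.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; a real implementation would use
# NLTK's or spaCy's full stopword sets.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "it", "this", "that"}

def frequency_summary(text, n_sentences=2):
    """Return the n_sentences highest-scoring sentences, in document order."""
    # Naive sentence split on end punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Count non-stopword tokens over the whole text.
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    if not freq:
        return text
    # Normalize so the most frequent word has weight 1.0.
    max_freq = max(freq.values())
    weights = {w: c / max_freq for w, c in freq.items()}
    # Score each sentence by summing the weights of its words.
    scored = []
    for i, sent in enumerate(sentences):
        score = sum(weights.get(w, 0.0) for w in re.findall(r"[a-z']+", sent.lower()))
        scored.append((score, i, sent))
    # Keep the top sentences, then restore original document order.
    top = sorted(scored, reverse=True)[:n_sentences]
    return " ".join(s for _, i, s in sorted(top, key=lambda t: t[1]))
```

The adjustable `n_sentences` parameter corresponds to the tunable summary size mentioned above: sentences whose words occur often in the document score higher and are kept.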
During the lectures, we will implement each of these algorithms step by step using modern technologies: the Python programming language, the NLTK (Natural Language Toolkit) and spaCy libraries, and Google Colab, so you will have no problems installing or configuring software on your local machine.
In addition to implementing the algorithms, you will also learn how to extract news from blogs and RSS feeds, as well as generate interesting visualizations of the summaries using HTML! After implementing the algorithms from scratch, there is an additional module in which you use dedicated libraries to summarize documents: sumy, pysummarization, and BERT summarizer.
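One simple way to render a summary as HTML, roughly in the spirit of the visualization module described above, is to highlight the selected sentences inside the original text. This sketch uses only the standard library; the `<mark>`-based styling and the sentence-splitting regex are assumptions for illustration, not the course's exact code.

```python
import html
import re

def highlight_summary(text, summary_sentences):
    """Return the text as an HTML paragraph with summary sentences highlighted."""
    parts = []
    for sent in re.split(r"(?<=[.!?])\s+", text.strip()):
        # Escape the sentence so raw text cannot break the HTML.
        escaped = html.escape(sent)
        if sent in summary_sentences:
            parts.append(f"<mark>{escaped}</mark>")
        else:
            parts.append(escaped)
    return "<p>" + " ".join(parts) + "</p>"
```

The resulting string can be written to a file and opened in a browser, or displayed directly in a Google Colab cell.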
At the end of the course, you will know everything you need to create your own summarization algorithms! If you have never heard of text summarization, this course is for you! On the other hand, if you are already experienced, you can use this course to review the concepts.
Who this course is for:
- People interested in natural language processing and text summarization
- People interested in the spaCy and NLTK libraries
- Students who are studying subjects related to Artificial Intelligence
- Data Scientists who want to increase their knowledge in natural language processing
- Professionals interested in developing text summarization solutions
- Beginners who are starting to learn natural language processing