Text Summarization of an Article Extracted from Wikipedia Using NLTK Library

2021 
The Internet is an excellent source of information on virtually every topic. However, because so much content is available, extracting the exact information needed becomes a challenging task. The main objective of a text summarization system is to identify and present the most relevant information from a given text to end users. Data is now available in enormous quantities, and it is not possible for a user to read all of it and draw conclusions about a specific subject. With text summarization, a large body of content is converted into a short set of information. Wikipedia in particular covers almost every area; a search for a specific keyword returns extensive detail. Text summarization conveys the same extensive information by condensing it into smaller pieces without losing its message. To demonstrate the concept, in this research we took data from Wikipedia and applied summarization techniques to reduce the content without changing its meaning. Text summarization has been an active area of study for several years. Abstractive methods generate an internal semantic representation of the original content; by rewriting parts of the source document, abstraction can condense the text more strongly than extraction can.
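As a rough illustration of the kind of extractive pipeline the paper describes, the sketch below scores sentences by the normalized frequency of their non-stopword tokens and keeps the top-scoring ones. It is a minimal example built on standard NLTK calls (sent_tokenize, word_tokenize, stopwords), not the authors' exact implementation; the article text is assumed to be already available as a plain string.

```python
# Minimal extractive summarization sketch with NLTK (illustrative only).
# Assumes the Wikipedia article text is already loaded into a string.
import heapq
import nltk
from nltk.corpus import stopwords

nltk.download("punkt", quiet=True)
nltk.download("stopwords", quiet=True)


def summarize(text: str, n_sentences: int = 3) -> str:
    stop_words = set(stopwords.words("english"))

    # Word frequencies over the whole article, ignoring stopwords/punctuation.
    freq = {}
    for word in nltk.word_tokenize(text.lower()):
        if word.isalpha() and word not in stop_words:
            freq[word] = freq.get(word, 0) + 1
    if not freq:
        return ""

    # Normalize frequencies so the most common word has weight 1.0.
    max_freq = max(freq.values())
    freq = {w: f / max_freq for w, f in freq.items()}

    # Score each sentence by the sum of its word weights.
    sentences = nltk.sent_tokenize(text)
    scores = {}
    for sent in sentences:
        for word in nltk.word_tokenize(sent.lower()):
            if word in freq:
                scores[sent] = scores.get(sent, 0.0) + freq[word]

    # Keep the n highest-scoring sentences, in their original order.
    best = set(heapq.nlargest(n_sentences, scores, key=scores.get))
    return " ".join(s for s in sentences if s in best)


# Example usage with a placeholder standing in for a Wikipedia article.
article_text = "..."  # plain text of the page to be summarized
print(summarize(article_text, n_sentences=3))
```

Sentence scoring by word frequency is only one of several extractive heuristics (others weight sentence position or use TF-IDF), but it captures the core idea of selecting a small subset of sentences that carries most of the article's content.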