Class ScrapeArticles


  • public class ScrapeArticles
    extends java.lang.Object
This class runs the primary iteration-loop for downloading news-articles from a list of article-URLs.

    News-Site Scrape: User's Main A.P.I. Class

Once a list of News-Article URLs has been extracted from the Web-Site using class ScrapeURLs, the content of each of those News-Stories may be retrieved from the site, using this class, and saved to disk.

This class simply uses the Java HTML JAR Library's Scraper class to retrieve the HTML and parse it into HTML-Vectors. These HTML pages are then saved, using standard Java Object Serialization (java.io.Serializable), to a directory of your choice.

If you would like, these serialized HTML-Vectors are easily converted to standard HTML files using class ToHTML.
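
As a hedged illustration only (this is not the ToHTML API itself), the sketch below converts one serialized page back into an '.html' file by hand. The file-name 'article0001.vdat', the import package-names, and the helper calls 'Util.pageToString(...)' and 'FileRW.writeFile(...)' are assumptions made for this example; check the library documentation for the exact names, signatures, and on-disk file layout.

    import Torello.HTML.HTMLNode;
    import Torello.HTML.Util;
    import Torello.Java.FileRW;

    import java.util.Vector;

    public class ConvertOneArticle
    {
        public static void main(String[] argv) throws java.io.IOException
        {
            // HYPOTHETICAL file-name: the names actually written by the file-system
            // receiver may differ.  Read a serialized HTML-Vector back into memory.
            @SuppressWarnings("unchecked")
            Vector<HTMLNode> page = (Vector<HTMLNode>)
                FileRW.readObjectFromFileNOCNFE("article0001.vdat", Vector.class, true);

            // ASSUMED helpers: 'Util.pageToString' renders a vectorized page back into
            // HTML text, and 'FileRW.writeFile' saves a CharSequence to disk.
            String html = Util.pageToString(page);
            FileRW.writeFile(html, "article0001.html");
        }
    }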



This class simply runs a download on each article URL that is passed to it. It provides a simple mechanism for saving the articles that it finds to the file-system.

    Example:
// This builds an "Article Getter."  Each news-article on the web-site is wrapped in a 
// <DIV CLASS="content ..."> HTML divider element.  This is how to retrieve the article-body.
    
    ArticleGet getter = ArticleGet.usual("div", "class", TextComparitor.EQ, "content");
    
    // Save the state of the download, just in case.  Use the standardized "File System Pause" class
    // by calling the factory-builder method 'getFSInstance' - and provide a simple file-name where
    // the state may be saved.  The file will be under 1 kb.
    
    Pause pause = Pause.getFSInstance("state.dat");
    
// Load the news web-site article URLs that were previously retrieved by class ScrapeURLs
// and saved to disk.
    Vector<Vector<String>> articleURLs = (Vector<Vector<String>>)
        FileRW.readObjectFromFileNOCNFE("urls.vdat", Vector.class, true);
    
// Use the standard, factory-provided "ScrapedArticleReceiver."  The method 'saveToFS' returns
// a receiver that sends data-files to the directory 'chineseNewsBoard/' on the
// local file-system.
    
    ScrapedArticleReceiver receiver = ScrapedArticleReceiver.saveToFS("chineseNewsBoard/");
    
    // Make sure to call initialize, and then start the article downloading process.
    pause.initialize();
    ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, System.out);
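
The call to 'download' above discards its return value. The method actually returns a two-dimensional Vector of DownloadResult constants that exactly parallels 'articleURLs' (see the 'Returns:' description, below). The short follow-up sketch here simply tallies those constants; no particular DownloadResult constant-names are assumed.

    // Capture the results, rather than discarding them.
    Vector<Vector<DownloadResult>> results = ScrapeArticles.download
        (receiver, articleURLs, getter, true, null, false, pause, System.out);

    // Count how many article-URLs produced each DownloadResult constant.
    java.util.TreeMap<DownloadResult, Integer> tally = new java.util.TreeMap<>();

    for (Vector<DownloadResult> section : results)
        for (DownloadResult dr : section)
            tally.merge(dr, 1, Integer::sum);

    System.out.println(tally);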
    



    Stateless Class:
This class neither contains any program-state, nor can it be instantiated. The @StaticFunctional Annotation may also be called 'The Spaghetti Report'. Static-Functional classes are, essentially, C-styled files: they have no accessible constructors and no non-static member fields. The concept is very similar to the Enterprise Java Beans (EJB) @Stateless Annotation.

    • 1 Constructor(s), 1 declared private, zero-argument constructor
    • 1 Method(s), 1 declared static
    • 1 Field(s), 1 declared static, 1 declared final


    • Method Summary

       
Download Articles with an Article URL List & ArticleGet
Modifier and Type:  static Vector<Vector<DownloadResult>>
Method:             download(ScrapedArticleReceiver articleReceiver, Vector<Vector<String>> articleURLs, ArticleGet articleGetter, boolean skipArticlesWithoutPhotos, StrFilter bannerAndAdFinder, boolean keepOriginalPageHTML, Pause pause, Appendable log)
      • Methods inherited from class java.lang.Object

        clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
    • Method Detail

      • download

        public static java.util.Vector<java.util.Vector<DownloadResult>> download​
                    (ScrapedArticleReceiver articleReceiver,
                     java.util.Vector<java.util.Vector<java.lang.String>> articleURLs,
                     ArticleGet articleGetter,
                     boolean skipArticlesWithoutPhotos,
                     StrFilter bannerAndAdFinder,
                     boolean keepOriginalPageHTML,
                     Pause pause,
                     java.lang.Appendable log)
                throws PauseException,
                       ReceiveException,
                       java.io.IOException
        
This method downloads the newspaper articles pointed to by the provided URL list, passing each successfully-retrieved article to the 'articleReceiver'.
        Parameters:
articleReceiver - This is an instance of ScrapedArticleReceiver. Whenever an Article has successfully downloaded, it will be passed to this 'receiver' class. There is a pre-written, standard ScrapedArticleReceiver that writes to a directory on the file-system as Articles are downloaded. If there is a need to transmit downloaded Articles elsewhere, implement that interface, and provide an instance of it to this parameter.
        articleURLs - this is a parameter that should have been generated by a call to method: ScrapeURLs.getArticleURLs(...)
        articleGetter - This is basically a "Post-Processor" for HTML Web-based newspaper articles. This parameter cannot be null. It is just a simple, one-line, lambda-predicate which needs to be implemented by the programmer. Internet news websites (such as: news.yahoo.com, cnn.com, and gov.cn) have News-Articles on pages that contain a lot of extraneous and advertising links and content. This parameter needs to extract the Article-body content from the rest of the page. This is usually very trivial, but it is also mandatory. Read about the class ArticleGet for more information about extracting the news-content from a Newspaper Article web-page.
skipArticlesWithoutPhotos - When this is TRUE, articles that contain only textual content will be skipped. This can be useful for foreign-news sources, where the reader is usually working harder to understand the content in the first place. This class is primarily used with foreign-news content web-sites, and staring at pages of Mandarin Chinese or Spanish is usually a lot easier if there is at least one photo on the page. This parameter allows users to skip text-dense articles that do not contain at least one picture.
bannerAndAdFinder - This parameter may be null, but if it is not, it will be used to skip banner-advertisement images. This parameter, in reality, does very little. It will not actually be used to eliminate advertising images, but rather only to identify when an image is a banner, advertisement, or spurious picture. Since this is a news web-site scraping Java package, there is a feature that allows a user to require that only newspaper articles containing a photo be downloaded, and the real purpose of including the 'bannerAndAdFinder' is to allow the scrape mechanism to skip articles whose only photos are advertisements.

        NOTE: Again, the primary impetus for developing these tools was for scraping and translating news articles from foreign countries like Spain, China, and parts of South America. It could be used for any news-source desired. When reading foreign language text - it helps "a little bit more" to see a picture. This parameter is solely used for that purpose.

        PRODUCT ADVERTISEMENTS & FACEBOOK / TWITTER LINKS: Removing actual links about "pinning to Reddit.com" or "Tweeting" articles can be done using either:

        • ArticleGet - Writing an instance of ArticleGet that NOT ONLY extracts the body of a newspaper-article, BUT ALSO performs HTML cleanup using the 'Remove' method of the NodeSearch Package.
        • HTMLModifier - Writing a "cleaner" version of the HTMLModifier lambda expression / Function Interface can also use the NodeSearch classes for removing annoying commercials - or buttons about "Sharing a link on Facebook." The class ToHTML provides a window for accepting an instance of HTMLModifier when converting the generated serialized-data HTML Vector's into '.html' index files.
keepOriginalPageHTML - When this is TRUE, the original page HTML will be stored in the result set. When this is FALSE, null will be stored in place of the original page data.

        NOTE: The original page HTML is the source HTML that is fed into the ArticleGet lambda. It contains the "pre-processed HTML."
pause - If there are many articles to download, pass an instance of class Pause so that intermediate progress can be saved and reloaded at a later time.
log - This parameter may not be null; if it is, a NullPointerException will be thrown. As articles are downloaded, notices will be posted to this 'log' by this method. This parameter expects an implementation of Java's java.lang.Appendable interface, which allows for a wide range of options when logging intermediate messages.
Class or Interface Instance              Use & Purpose
'System.out'                             Sends text to the standard-out terminal
Torello.Java.StorageWriter               Sends text to System.out, and saves it, internally
FileWriter, PrintWriter, StringWriter    General-purpose Java text-output classes
FileOutputStream, PrintStream            More general-purpose Java text-output classes

        Checked IOException:
        The Appendable interface requires that the Checked-Exception IOException be caught when using its append(...) methods.
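
As an illustration of how flexible the 'log' parameter is, below is a minimal, self-contained Appendable that prints to System.out while also keeping an in-memory copy of everything that was logged. The class-name 'TeeLog' is invented for this sketch; the library's own Torello.Java.StorageWriter already provides equivalent behavior.

    // A simple java.lang.Appendable that writes to System.out and keeps a copy in memory.
    public class TeeLog implements Appendable
    {
        private final StringBuilder copy = new StringBuilder();

        public Appendable append(CharSequence csq)
        { System.out.print(csq); copy.append(csq); return this; }

        public Appendable append(CharSequence csq, int start, int end)
        { return append(csq.subSequence(start, end)); }

        public Appendable append(char c)
        { System.out.print(c); copy.append(c); return this; }

        public String getCopy() { return copy.toString(); }
    }

    // Usage (re-using the variables from the example at the top of this page):
    // ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, new TeeLog());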
        Returns:
A Vector that is exactly parallel to the input Vector<Vector<String>> articleURLs will be returned. Each element of each of the sub-Vectors in this two-dimensional Vector will contain an instance of the enumerated-type 'DownloadResult'. The constant-value in 'DownloadResult' will identify whether or not the Article pointed to by the URL at that Vector-location successfully downloaded.

If the download failed, then the value of the enum 'DownloadResult' will identify the error that occurred when attempting to scrape that particular news-story URL.
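
Because the returned Vector is exactly parallel to 'articleURLs', each result may be matched back to the URL that produced it by index. A short sketch, re-using the 'results' and 'articleURLs' variables shown earlier on this page:

    // results.elementAt(i).elementAt(j) describes the download-attempt made for
    // the article located at articleURLs.elementAt(i).elementAt(j)
    for (int i = 0; i < results.size(); i++)
        for (int j = 0; j < results.elementAt(i).size(); j++)
            System.out.println
                (articleURLs.elementAt(i).elementAt(j) + "\t" + results.elementAt(i).elementAt(j));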
        Throws:
        PauseException - If there is an error when attempting to save the download state.
        ReceiveException - If there are any problems with the ScrapedArticleReceiver

NOTE: A ReceiveException implies that the user's code has failed to properly handle or save an instance of Article that was successfully downloaded by this class ScrapeArticles. A ReceiveException will halt the download process immediately, and the download state will be saved if the user has provided a reference to the Pause parameter.

        NOTE: Other internally caused download-exceptions will be handled and logged (without halting the entire download-process) - and downloading will continue. A note about the internally-produced exception will be printed to the log, and an appropriate instance of enum DownloadResult will be put in the return Vector.
java.io.IOException - This exception is required for any method that uses Java's interface java.lang.Appendable. Here, the 'Appendable' is the log, and if writing to this user-provided 'log' produces an exception, then download progress will halt immediately, and the download state will be saved if the user has provided a reference to the Pause parameter.
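
Since any of these exceptions will halt the download, a caller might wrap the invocation as shown below. The variable-names re-use those from the example at the top of this page; this is only one reasonable way to structure the error-handling, not a requirement of the API.

    try
    {
        ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, System.out);
    }

    catch (ReceiveException re)
    {
        // The receiver failed to handle a successfully-downloaded Article.  Downloading has
        // halted, and (because 'pause' was provided) the download-state has been saved.
        re.printStackTrace();
    }

    catch (PauseException pe)
    {
        // Saving the intermediate download-state failed.
        pe.printStackTrace();
    }

    catch (java.io.IOException ioe)
    {
        // Writing to the 'log' Appendable (here, System.out) failed.
        ioe.printStackTrace();
    }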