Class ScrapeURLs


  • public class ScrapeURLs
    extends java.lang.Object
    Collects all news-article URL's from a news-oriented web-site's main web-page, and from its list of 'sub-section' web-pages.

    News-Site Scrape: User-Main A.P.I. Class

    This class will scour a News or Information Web-Site for all relevant Article URL's, and save those URL's to a Vector. Once finished, the complete list of Article-URL's may be returned to the user for subsequent downloading of each Article's HTML content.

    Once the URL's have been collected, class ScrapeArticles may be used to retrieve the contents of each of the pages for those URL's.



    This HTML Search, Parse and Scrape package was initially written to help download and translate news-articles from overseas web-sites.

    The purpose of this class is to scrape the relevant news-paper articles from an Internet News Web-Site. These article URL's are returned inside of a "Vector of Vector's." As should be obvious, most news-based web-sites on the Internet have, since their founding, divided their news-articles into separate "sub-sections." Such sections often include "News", "World News", "Life", "Sports", "Finance", etc.

    Generally, searching through only the "top-level" news-site web-page is not enough to retrieve all articles available on the site for any given day of the week. The primary purpose of this class is to visit each page in a user-provided "Section's List", and to identify each and every Article-URL available in each of those sub-sections (and return those lists to the programmer).

    The "Vector of Vector's" that is returned by this class' "get" methods will return all identified News Article URL's in each sub-section of any news web-site, assuming the appropriate "Getters" have been provided. This list of sub-sections (which have been described here) are expected to be provided to the "get" method, when invoking it, by passing a list of sections to the parameter "sectionURLs".

    In addition to a list of sub-sections, the user should also specify an instance of URLFilter. This filter informs the scraper which URL's to ignore, and which to keep. In most of the news-sites that have been tested with this package, any non-advertising "related article URL's" seem to follow a very specific pattern that a plain-old regular-expression can easily identify.

    This package has a small Lambda-Target (Function-Pointer) class called LinksGet that lets you use any number of very common and very simple mechanisms for identifying (as in a 'PASS' / 'FAIL') which URL's are, indeed, URL's for an actual News-Article. This allows the programmer to skip over swaths of advertisement, photo-journal, and any number of irrelevant link-pages.

    Perhaps the user may wonder what work this class is actually doing if it is necessary to provide an instance of URLFilter and a Vector of 'sectionURLs' - ... and the answer is not a lot! This class is actually very short; it just ensures that as much error checking as possible is done, and that the returned Vector has been checked for validity!


    REMEMBER:
    Building an instance of LinksGet should require nothing more than perusing the HTML on the sections of your site, and checking out what features each of the actual article-URL's have in common.

    Here is an example "URL Retrieve" operation on the Mandarin Chinese Language Government Web-Portal available in North America. Translating these pages, to study the politics and technology from the other side of the Pacific Ocean, was the primary impetus for developing the Java-HTML JAR Library.

    Example:
    // Sample Article URL from the Chinese National Web-Portal - all valid articles have the
    // basic pattern
    // http://www.gov.cn/xinwen/2020-07/17/content_5527889.htm
    //
    // This "Regular Expression" will match any News Article URL that "looks like" the above URL.
    // This Regular-Expression can be passed to class URLFilter.
    
    String    articleURLRegExStr  = "http://www.gov.cn/xinwen/\\d\\d\\d\\d-\\d\\d/\\d\\d/content_\\d+\\.html?";
    Pattern   articleURLsRegEx    = Pattern.compile(articleURLRegExStr);
    URLFilter filter              = URLFilter.fromStrFilter(StrFilter.regExKEEP(articleURLsRegEx, true));
    
    // This will hold the list of "Main Page Sub-Sections".  In this example, we will only look at 
    // Articles on the "First Page", and the rest of the News-Papers Sub-Sections will be ignored.
    //
    // For the purposes of this example, only one section of the 'www.Gov.CN/' web-portal will be
    // visited.  There are other "Newspaper SubSections" that could easily be added to this Vector.
    // If more sections were added, more news-article URL's would likely be found, identified and 
    // returned.
    
    Vector<URL> sectionURLs = new Vector<>(1);
    sectionURLs.add(new URL("https://www.gov.cn/"));
    
    // Run the Article URL scraper.  In this example, the 'filter' (a URLFilter) is enough for
    // getting the ArticleURL's.  'null' is passed to the LinksGet parameter.
    
    Vector<Vector<String>> articleURLs = ScrapeURLs.get(sectionURLs, filter, null, System.out);
    
    // This will write every article URL to a text file called "urls.txt".
    //
    // NOTE: Since only one Sub-Section was added in this example, there is no need to write out 
    //       the entire "Vector of Vectors", but rather just the first (and only) element's contents
    
    FileRW.writeFile(articleURLs.elementAt(0), "urls.txt");
    
    // This will write the article-URL's vector to a serialized-object data-file called "urls.vdat"
    FileRW.writeObjectToFile(articleURLs, "urls.vdat", true);
    
    // AT THIS POINT, YOU SHOULD BE READY TO RUN THE ARTICLE-SCRAPER CLASS
    // ...
    


    NOTE:
    The 'urls.vdat' file that was created can easily be retrieved using Java's de-serialization streams. Because the cast (below) is an unchecked cast, an annotation of the form @SuppressWarnings("unchecked") is required on the method in which this line appears.

    Using Java's Serialization and De-Serialization Mechanism for saving temporary results to disk is extremely easy in Java-HTML.

    Java Line of Code:
    Vector<Vector<String>> urls = (Vector<Vector<String>>) FileRW.readObjectFromFile("urls.vdat", Vector.class, true);
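
    For instance, here is a minimal sketch of a helper-method that reads the file back in. (The 'throws' clause below is an assumption; consult FileRW.readObjectFromFile for its actual checked exceptions.)

    // A minimal sketch, assuming 'urls.vdat' was written by the example above.
    // The cast from java.lang.Object is an unchecked cast, hence the annotation.
    @SuppressWarnings("unchecked")
    public static Vector<Vector<String>> readSavedURLs()
        throws IOException, ClassNotFoundException
    {
        return (Vector<Vector<String>>)
            FileRW.readObjectFromFile("urls.vdat", Vector.class, true);
    }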
    



    Stateless Class:
    This class neither contains any program-state, nor can it be instantiated. The @StaticFunctional Annotation may also be called 'The Spaghetti Report'. Static-Functional classes are, essentially, C-Styled Files, without any constructors or non-static member fields. It is a concept very similar to the Java-Bean's @Stateless Annotation.

    • 1 Constructor(s), 1 declared private, zero-argument constructor
    • 3 Method(s), 3 declared static
    • 1 Field(s), 1 declared static, 0 declared final
    • Fields excused from final modifier (with explanation):
      Field 'SKIP_ON_SECTION_URL_EXCEPTION' is not final. Reason: CONFIGURATION


    • Method Summary

       
      Retrieve Article URL's using a LinksGet and an Optional Filter
      Modifier and Type                Method
      static Vector<Vector<String>>    get(Vector<URL> sectionURLs, URLFilter articleURLFilter, LinksGet linksGetter, Appendable log)
      static Vector<Vector<String>>    get(NewsSite ns, Appendable log)
      • Methods inherited from class java.lang.Object

        clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
    • Field Detail

      • SKIP_ON_SECTION_URL_EXCEPTION

        public static boolean SKIP_ON_SECTION_URL_EXCEPTION
        This is a static boolean configuration field. When this is set to TRUE, if one of the "Section URL's" provided to this class is not valid, and generates a 404 FileNotFoundException, or some other HttpConnection exception, those exceptions will simply be logged, and quietly ignored.

        When this flag is set to FALSE, any problems that can occur when attempting to pick out News Article URL's from a Section Web-Page will cause a SectionURLException to throw, and the ScrapeURL's process will halt.

        SIMPLY PUT: There are occasions when a news web-site will remove a section such as "Commerce", "Sports", or "Travel." When one of these suddenly goes missing, it is usually better to just skip that section rather than halt the entire scrape; to get that behavior, keep this flag set to TRUE.

        ALSO: This is, indeed, a public and static flag (field) which does mean that all processes (Thread's) using class ScrapeURLs must share the same setting (simultaneously). This particular flag CANNOT be changed in a Thread-Safe manner.
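
        For example (a minimal sketch, reusing the 'sectionURLs' and 'filter' variables from the example at the top of this page):

        // Halt - and allow SectionURLException to throw - whenever a Section-URL turns
        // out to be invalid, instead of the default log-and-skip behavior.
        ScrapeURLs.SKIP_ON_SECTION_URL_EXCEPTION = false;

        Vector<Vector<String>> articleURLs = ScrapeURLs.get(sectionURLs, filter, null, System.out);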
    • Method Detail

      • get

        public static java.util.Vector<java.util.Vector<java.lang.String>> get
                    (java.util.Vector<java.net.URL> sectionURLs,
                     URLFilter articleURLFilter,
                     LinksGet linksGetter,
                     java.lang.Appendable log)
        
        This method is used to retrieve all of the available Article URL-Links found on all sections of a newspaper web-site.
        Parameters:
        sectionURLs - This should be a Vector of URL's that contains all of the "Main News-Paper Page Sections." Typical News-Paper Sections are things like: Life, Sports, Business, World, Economy, Arts, etc... This parameter may not be null, or a NullPointerException will throw.
        articleURLFilter - If there is a standard pattern for a URL that must be avoided, then this filter parameter should be used. This parameter may be null, and if it is, it shall be ignored. This Java URL-Predicate (an instance of Predicate<URL>) should return TRUE if a particular URL needs to be kept, not filtered. When this Predicate evaluates to FALSE - the URL will be filtered.

        NOTE: This behavior is identical to the Java Stream's method "filter(Predicate<>)".

        ALSO: URL's that are filtered will neither be scraped, nor saved into the newspaper-article result-set output file.
        linksGetter - This parameter may be used to retrieve all of the links on a particular section-page. This parameter may be null. If it is null, it will be ignored - and all HTML Anchor (<A HREF=...>) links will be considered "Newspaper Articles to be scraped." Be careful about leaving this parameter null, because there may be many extraneous, non-news-article links on a particular Internet News Web-Site or inside a Web-Page Section.
        log - This receives textual log output as the scrape proceeds. This parameter may not be null, or a NullPointerException will throw. This expects an implementation of Java's java.lang.Appendable interface, which allows for a wide range of options when logging intermediate messages.
        Class or Interface Instance              Use & Purpose
        'System.out'                             Sends text to the standard-out terminal
        Torello.Java.StorageWriter               Sends text to System.out, and saves it, internally
        FileWriter, PrintWriter, StringWriter    General-purpose Java text-output classes
        FileOutputStream, PrintStream            More general-purpose Java text-output classes

        Checked IOException:
        The Appendable interface requires that the Checked-Exception IOException be caught when using its append(...) methods.
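
        For instance, here is a minimal sketch that collects the log output in memory rather than printing it to the terminal. (The 'sectionURLs' and 'filter' variables are those from the example at the top of this page.)

        // java.io.StringWriter implements java.lang.Appendable, so it may be passed
        // directly as the 'log' parameter, and its contents inspected afterwards.
        StringWriter log = new StringWriter();

        Vector<Vector<String>> articleURLs = ScrapeURLs.get(sectionURLs, filter, null, log);

        String logText = log.toString();   // the complete, accumulated log text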
        Returns:
        The "Vector of Vector's" that is returned is simply a list of all newspaper anchor-link URL's found on each Newspaper Sub-Section URL passed to the 'sectionURLs' parameter. The returned "Vector of Vector's" is parallel to the input-parameter Vector<URL> Section-URL's.

        What this means is that the Newspaper-Article URL-Links scraped from the page located at sectionURLs.elementAt(0) - will be stored in the return-Vector at ret.elementAt(0).

        The article URL's scraped from the page at sectionURLs.elementAt(1) will be stored in the return-Vector at ret.elementAt(1). And so on, and so forth...
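
        For example, here is a minimal sketch that walks the two parallel Vectors (assuming 'ret' holds the returned "Vector of Vector's"):

        // Each Section-URL lines up, index-for-index, with the Vector of Article-URL's
        // that were scraped from that Section's page.
        for (int i = 0; i < sectionURLs.size(); i++)
        {
            URL            section  = sectionURLs.elementAt(i);
            Vector<String> articles = ret.elementAt(i);

            System.out.println(section + " --> " + articles.size() + " Article URL's found");
        }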
        Throws:
        SectionURLException - If one of the provided sectionURL's (Life, Sports, Travel, etc...) is not valid, or is no longer available on the site, then this exception will throw. Note, however, that there is a flag (SKIP_ON_SECTION_URL_EXCEPTION) that will force this method to simply "skip" a faulty or non-available Section URL, and move on to the next news-article section.

        By default, this flag is set to TRUE, meaning that this method will skip news-paper sections that have been (temporarily) removed, rather than exiting. This default behavior can be changed by setting that flag to FALSE.