Class ScrapeArticles


  • public class ScrapeArticles
    extends java.lang.Object
    This class runs the primary iteration-loop for downloading news articles from a list of article URLs.

    News-Site Scrape: User's Main API Class

    Once a list of news-article URLs has been extracted from a web-site using class ScrapeURLs, the content of each of those news stories may be retrieved from the site, using this class, and saved to disk.

    This class simply uses the Java HTML JAR Library's scraper classes to retrieve the HTML and parse it into HTML Vectors. These HTML pages are saved, using standard Java Object Serialization (java.io.Serializable), to a directory of your choice.

    If you would like, these serialized HTML Vectors are easily converted to standard HTML files using class ToHTML
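
    Since the files written by this class are ordinary serialized objects, any one of them may also be read back using the standard JDK object-streams. Below is a minimal sketch; the file-name is hypothetical, and the exact saved type depends upon which ScrapedArticleReceiver was used to store the downloads:

    // Read one saved data-file back into memory.  Requires java.io.ObjectInputStream
    // and java.io.FileInputStream, plus handling (or declaring) the checked
    // exceptions IOException and ClassNotFoundException.
    
    Object savedPage;
    
    try (ObjectInputStream ois = new ObjectInputStream
        (new FileInputStream("chineseNewsBoard/0001.vdat")))    // hypothetical file-name
        { savedPage = ois.readObject(); }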



    This class simply runs a download on each article URL that is passed to it, and provides a simple mechanism for saving the articles it finds to the file-system.

    Example:
    // This builds an "Article Getter".  Each news-article on this web-site is wrapped in a 
    // <DIV CLASS="content ..."> HTML Divider Element.  This is how to retrieve the article-body.
    
    ArticleGet getter = ArticleGet.usual("div", "class", TextComparitor.EQ, "content");
    
    // Save the state of the download, just in case.  Use the standardized "File System Pause" class
    // by calling the factory-builder method 'getFSInstance' - and provide a simple file-name where
    // the state may be saved.  The file will be under 1 kb.
    
    Pause pause = Pause.getFSInstance("state.dat");
    
    // Load the list of news web-site article URLs that was previously retrieved from
    // class ScrapeURLs, and saved to the data-file 'urls.vdat'.
    Vector<Vector<String>> articleURLs = (Vector<Vector<String>>)
        FileRW.readObjectFromFileNOCNFE("urls.vdat", Vector.class, true);
    
    // Use the standard, factory-provided "ScrapedArticleReceiver".  This method returns
    // a receiver that writes data-files into the directory 'chineseNewsBoard/' on the
    // local file-system.
    
    ScrapedArticleReceiver receiver = ScrapedArticleReceiver.saveToFS("chineseNewsBoard/");
    
    // Make sure to call initialize, and then start the article downloading process.
    pause.initialize();
    ScrapeArticles.download(receiver, articleURLs, getter, true, null, false, pause, System.out);
    
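
    The call above may also capture the method's return-value in order to tally the results. A minimal sketch, re-using the variables from this example:

    Vector<Vector<DownloadResult>> results = ScrapeArticles.download
        (receiver, articleURLs, getter, true, null, false, pause, System.out);
    
    // Count the successfully-downloaded articles.  The returned two-dimensional
    // Vector is exactly parallel to 'articleURLs', and holds one DownloadResult
    // constant per article-URL.
    
    int successCount = 0;
    
    for (Vector<DownloadResult> section : results)
        for (DownloadResult dr : section)
            if (dr == DownloadResult.SUCCESS) successCount++;
    
    System.out.println("Downloaded (" + successCount + ") articles.");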



    Stateless Class:
    This class neither contains any program-state, nor can it be instantiated. The @StaticFunctional Annotation may also be called 'The Spaghetti Report'. Static-Functional classes are, essentially, C-styled files, without any constructors or non-static member fields. It is a concept very similar to the Java EE @Stateless Annotation.
    • 1 Constructor(s), 1 declared private, zero-argument constructor
    • 1 Method(s), 1 declared static
    • 1 Field(s), 1 declared static, 1 declared final


    • Method Summary

       
      Download Articles with an Article URL List & ArticleGet
      Modifier and Type: static Vector<Vector<DownloadResult>>
      Method: download(ScrapedArticleReceiver articleReceiver, Vector<Vector<String>> articleURLs, ArticleGet articleGetter, boolean skipArticlesWithoutPhotos, StrFilter bannerAndAdFinder, boolean keepOriginalPageHTML, Pause pause, Appendable log)
      • Methods inherited from class java.lang.Object

        clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
    • Method Detail

      • download

         
        public static java.util.Vector<java.util.Vector<DownloadResult>> download
                    (ScrapedArticleReceiver articleReceiver,
                     java.util.Vector<java.util.Vector<java.lang.String>> articleURLs,
                     ArticleGet articleGetter,
                     boolean skipArticlesWithoutPhotos,
                     StrFilter bannerAndAdFinder,
                     boolean keepOriginalPageHTML,
                     Pause pause,
                     java.lang.Appendable log)
                throws PauseException,
                       ReceiveException,
                       java.io.IOException
        
        This method performs the download of the newspaper articles pointed to by the URLs in 'articleURLs'.
        Parameters:
        articleReceiver - This is an instance of ScrapedArticleReceiver. Whenever an Article has been successfully downloaded, it will be passed to this 'receiver' class. There is a pre-written, standard ScrapedArticleReceiver that writes to a directory on the file-system as Articles are downloaded. If there is a need to transmit downloaded Articles elsewhere, implement that interface, and provide an instance of it to this parameter.
        articleURLs - This parameter should have been generated by a call to method: ScrapeURLs.getArticleURLs(...)
        articleGetter - This is basically a "Post-Processor" for HTML web-based newspaper articles. This parameter may not be null. It is just a simple, one-line lambda-predicate which needs to be implemented by the programmer. Internet news websites (such as: news.yahoo.com, cnn.com, and gov.cn) place their news-articles on pages that contain a lot of extraneous and advertising links and content. This parameter needs to extract the article-body content from the rest of the page. This is usually very trivial, but it is also mandatory. Read about the class ArticleGet for more information about extracting the news-content from a newspaper-article web-page. (A short sketch of a hand-written ArticleGet appears after this parameter list.)
        skipArticlesWithoutPhotos - This may be TRUE, and if it is, articles that contain only textual content will be skipped. This can be useful for foreign-news sources, where the reader is usually working harder to understand the content in the first place. This class is primarily used with foreign-news content websites; as such, staring at pages of Mandarin Chinese or Spanish is usually a lot easier if there is at least one photo on the page. This parameter allows users to skip highly dense articles that do not contain at least one picture.
        bannerAndAdFinder - This parameter may be null, but if it is not, it will be used to skip banner-advertisement images. This parameter, in reality, does very little. It will not actually be used to eliminate advertising images, but rather only to identify when an image is a banner, advertisement, or spurious picture. Since this is a news web-site scraping Java Package, there is a feature that allows a user to require that only newspaper articles containing a photo be downloaded; the real purpose of including the 'bannerAndAdFinder' is to allow the scrape mechanism to skip articles whose only photos are advertisements. (A short sketch of a hand-written StrFilter also appears after this parameter list.)

        NOTE: Again, the primary impetus for developing these tools was scraping and translating news articles from foreign countries like Spain, China, and parts of South America. They could be used for any news-source desired. When reading foreign-language text, it helps "a little bit more" to see a picture. This parameter is solely used for that purpose.

        PRODUCT ADVERTISEMENTS & FACEBOOK / TWITTER LINKS: Removing actual links about "pinning to Reddit.com" or "Tweeting" articles can be done using either:

        • ArticleGet - Writing an instance of ArticleGet that NOT ONLY extracts the body of a newspaper-article, BUT ALSO performs HTML cleanup using the 'Remove' method of the NodeSearch Package.
        • HTMLModifier - Writing a "cleaner" version of the HTMLModifier lambda-expression / Functional Interface, which can also use the NodeSearch classes for removing annoying commercials - or buttons about "Sharing a link on Facebook." The class ToHTML provides a window for accepting an instance of HTMLModifier when converting the generated serialized-data HTML Vectors into '.html' index files.
        keepOriginalPageHTML - When this is TRUE, the original page HTML will be stored in the result set. When this is FALSE, null will be stored in place of the original page data.

        NOTE: The original page HTML is the source HTML that is fed into the ArticleGet lambda - that is, the page as it stood before any article-extraction was performed.
        pause - If there are many articles to download, pass an instance of class Pause, and intermediate progress can be saved, and reloaded at a later time.
        log - This parameter may not be null, or a NullPointerException will throw. As articles are downloaded, notices are posted to this 'log' by this method. This parameter expects an implementation of Java's interface java.lang.Appendable, which allows for a wide range of options when logging intermediate messages. Some common choices:
        Class or Interface Instance - Use & Purpose:
        • 'System.out' - Sends text to the standard-out terminal
        • Torello.Java.StorageWriter - Sends text to System.out, and saves it, internally
        • FileWriter, PrintWriter, StringWriter - General-purpose Java text-output classes
        • FileOutputStream, PrintStream - More general-purpose Java text-output classes

        IMPORTANT: The interface Appendable requires that the checked exception IOException be caught when using its append(CharSequence) methods.
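
        Below is a brief sketch of hand-written versions of two of these parameters. It assumes, as the method-body below suggests, that ArticleGet's functional-method apply(URL, Vector<HTMLNode>) returns the article-body node-Vector, and that StrFilter's sole abstract method is test(String). The HTML tag and URL sub-strings used here are hypothetical; they would have to be read out of the target site's actual HTML:

         // A hand-written ArticleGet for a (hypothetical) site that wraps each story
         // in an HTML <MAIN> element.  TagNodeGetInclusive is the same NodeSearch
         // class used by the method-body below to retrieve the page <TITLE> element.
         
         ArticleGet getter = (URL url, Vector<HTMLNode> page) ->
             TagNodeGetInclusive.first(page, "main");
         
         // A hand-written StrFilter that treats any image whose SRC-URL mentions
         // "banner" or "/ads/" as an advertisement.  (Hypothetical URL sub-strings.)
         
         StrFilter bannerAndAdFinder = (String src) ->
             src.toLowerCase().contains("banner") || src.toLowerCase().contains("/ads/");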
        Returns:
        A Vector that is exactly parallel to the input Vector<Vector<String>> articleURLs will be returned. Each element of each sub-Vector in this two-dimensional Vector will hold an instance of the enumerated-type 'DownloadResult'. The constant-value in 'DownloadResult' will identify whether or not the Article pointed to by the URL at that Vector-location downloaded successfully.

        If the download failed, then the value of the enum 'DownloadResult' will identify the error that occurred when attempting to scrape that particular news-story URL
        Throws:
        PauseException - If there is an error when attempting to save the download state.
        ReceiveException - If there are any problems with the ScrapedArticleReceiver.

        NOTE: A ReceiveException implies that the user's code has failed to properly handle or save an instance of Article that was downloaded successfully by this class ScrapeArticles. A ReceiveException will halt the download process immediately, and the download state will be saved if the user has provided a reference to the Pause parameter.

        NOTE: Other internally caused download-exceptions will be handled and logged (without halting the entire download-process) - and downloading will continue. A note about the internally-produced exception will be printed to the log, and an appropriate instance of enum DownloadResult will be put in the return Vector.
        java.io.IOException - This exception is required for any method that uses Java's interface java.lang.Appendable. Here, the 'Appendable' is the log, and if writing to this user-provided 'log' produces an exception, then download progress will halt immediately, and the download state will be saved if the user has provided a reference to the Pause parameter.
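
        Since all three of these are checked exceptions, a calling method must either declare them or handle them explicitly. A minimal calling-pattern sketch, re-using the variables from the class-example above:

         try
         {
             ScrapeArticles.download
                 (receiver, articleURLs, getter, true, null, false, pause, System.out);
         }
         catch (ReceiveException re)
         {
             // The receiver failed to store a downloaded Article.  Download state has
             // already been saved to 'pause', so the run may be resumed after the
             // receiver is fixed.
         }
         catch (PauseException pe)
         {
             // Saving the download-state itself failed.
         }
         catch (java.io.IOException ioe)
         {
             // Writing to the 'log' Appendable failed; state was saved to 'pause'.
         }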
        Code:
        Exact Method Body:
         log.append(
             "\n" + BRED + STARS + STARS +
             RESET + " Downloading Articles" + BRED + "\n" +
             STARS + STARS + RESET + '\n'
         );
        
         // The loop variables, and the return-result Vector.
         int                             outerCounter    = 0;
         int                             innerCounter    = 0;
         int                             successCounter  = 0;
         boolean                         firstIteration  = true;
         Vector<Vector<DownloadResult>>  ret             = null;
         URL                             url             = null;
         Runtime                         rt              = Runtime.getRuntime();
        
         // If the user has passed an instance of 'pause' then it should be loaded from disk.
         if (pause != null)
         {
             Ret4<Vector<Vector<DownloadResult>>, Integer, Integer, Integer> r = pause.loadState();
        
             ret             = r.a;
             outerCounter    = r.b.intValue();
             innerCounter    = r.c.intValue();
             successCounter  = r.d.intValue();
         }
        
         // If the user did not provide a "Pause" mechanism, **OR** the "Pause Mechanism" asserts
         // that the download process is starting from the beginning of the article-URL Vector,
         // THEN a *new vector* should be built.
         if (    (pause == null)
             ||  ((outerCounter == 0) && (innerCounter == 0) && (successCounter == 0))
         )
         {
             // Need to instantiate a brand new return vector.  The downloader is starting over
             // at the beginning of the Article URL list.
        
             ret = new Vector<>(articleURLs.size());
        
             // Initializes the capacity (sizes) of the two-dimensional "Return Vector."
             //
             // NOTE: The return Vector is exactly parallel to the input "articleURLs"
             //       two-dimensional input Vector.
        
             for (int i=0; i < articleURLs.size(); i++) 
                 ret.add(new Vector<DownloadResult>(articleURLs.elementAt(i).size()));
         }
        
         for (; outerCounter < articleURLs.size(); outerCounter++)
         {
             // System.out.println("outerCounter=" + outerCounter + ", innerCounter=" +
             //      innerCounter + ", articleURLs.size()=" + articleURLs.size());
        
             // System.out.println("articleURLs.elementAt(" + outerCounter + ").size()=" +
             //      articleURLs.elementAt(outerCounter).size());
        
             for (   innerCounter = (firstIteration ? innerCounter : 0);
                     innerCounter < articleURLs.elementAt(outerCounter).size();
                     innerCounter++
                 )
        
                 try
                 {
                     firstIteration = false;
                     String urlStr = articleURLs.elementAt(outerCounter).elementAt(innerCounter);
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Instantiate the URL object from the URLStr String.
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     // Should never happen, because each URL will have already been tested 
                     // and instantiated in the previous method.
        
                     try
                         { url = new URL(urlStr); }
        
                     catch (Exception e)
                     {
                         log.append
                             ("Could not instantiate URL-String into URL for [" + urlStr + "].\n");
        
                         ret.elementAt(outerCounter).add(DownloadResult.BAD_ARTICLE_URL);
                         continue;
                     }
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Run the Garbage Collector, Print Article URL and Number to log.
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     rt.gc();
                     String              freeMem         = StringParse.commas(rt.freeMemory());
                     String              totalMem        = StringParse.commas(rt.totalMemory());
        
                     log.append(
                         "\nVisiting URL: [" +
                         YELLOW +  StringParse.zeroPad10e4(outerCounter) + RESET + 
                         " of " + StringParse.zeroPad10e4(articleURLs.size()) + ", " +
                         YELLOW +  StringParse.zeroPad10e4(innerCounter) + RESET + 
                         " of " + StringParse.zeroPad10e4
                             (articleURLs.elementAt(outerCounter).size()) + "] " +
                         CYAN         + " - "  + url                       + RESET + '\n' +
                         "Available Memory: "    + YELLOW +  freeMem       + RESET + '\t' +
                         "Total Memory: "        + YELLOW +  totalMem      + RESET + '\n'
                     );
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Scrape the web-page
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     int                 retryCount      = 0;
                     Vector<HTMLNode>    page            = null;
        
                     while ((page == null) && (retryCount < 5))
        
                         try
                             { page = HTMLPageMWT.getPageTokens(15, TimeUnit.SECONDS, url, false); }
            
                         catch (Exception e)
                         {
                             log.append(HTTPCodes.convertMessageVerbose(e, url, 1) + '\n');
                             retryCount++;
                         }
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Verify the results of scraping the web-page
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     if (page == null)
                     {
                         log.append(
                             BRED + "\tArticle could not download, max 5 retry counts." +
                             RESET + '\n'
                         );
        
                         ret.elementAt(outerCounter).add(DownloadResult.COULD_NOT_DOWNLOAD);
                         continue;
                     }
        
                     if (page.size() == 0)
                     {
                         log.append(
                             BRED + "\tArticle was retrieved, but page-vector was empty" +
                             RESET + '\n'
                         );
        
                         ret.elementAt(outerCounter).add(DownloadResult.EMPTY_PAGE_VECTOR);
                         continue;
                     }
        
                     log.append
                         ("\tPage contains (" + YELLOW + page.size() + RESET + ") HTMLNodes.\n");
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Retrieve the <TITLE> element (as a String) from the page - if it has one.
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     String title = Util.textNodesString(TagNodeGetInclusive.first(page, "title"));
        
                     if (title.length() > 0)
                         log.append
                             ("\tPage <TITLE> element is: " + YELLOW + title + RESET + '\n');
        
                     else
                         log.append("\tPage has no <TITLE> element, or it was empty.\n");
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Use the Article-Getter to get the Article-Body.  Watch for Exceptions.
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     Vector<HTMLNode> article = null;
        
                     // The function-pointer (FunctionalInterface) 'articleGetter' is supposed to
                     // locate and extract the Article's HTML from the surrounding web-page, which
                     // is usually fully-loaded with advertisements, and "See This Also" links.
                     //
                     // All news-websites I have seen wrap the article itself in an HTML <MAIN>,
                     // <ARTICLE>, <SECTION ROLE='article'>, or <DIV CLASS='main'> tag
                     // that is very easy to find.  Although these tags differ from site-to-site,
                     // each site will use the same tag for all of its articles.
                     //
                     // (But you have to look at the HTML first)
        
                     try
                         { article = articleGetter.apply(url, page); }
        
                     catch (ArticleGetException e)
                     {
                         log.append(
                             BRED + "\tArticleGet.apply(...) failed: " + e.getMessage() +
                             RESET + "\nException Cause Chain:\n" + EXCC.toString(e) + '\n'
                         );
        
                         ret.elementAt(outerCounter).add(DownloadResult.ARTICLE_GET_EXCEPTION);
                         continue;
                     }
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Verify the results of article get, and choose the right DownloadResult
                     // Enumerated-Constant if the download failed
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     if (article == null)
                     {
                         log.append(
                             BRED + "\tContent-body not found by ArticleGet.apply(...)\n" +
                             RESET
                         );
        
                         ret.elementAt(outerCounter).add(DownloadResult.ARTICLE_GET_RETURNED_NULL);
                         continue;
                     }
        
                     if (article.size() == 0)
                     {
                         log.append(
                             BRED + "\tContent-body not found by ArticleGet.apply(...)\n" +
                             RESET
                         );
        
                         ret.elementAt(outerCounter)
                             .add(DownloadResult.ARTICLE_GET_RETURNED_EMPTY_VECTOR);
                         continue;
                     }
        
                     log.append(
                         "\tArticle body contains (" + YELLOW + article.size() + RESET +
                         ") HTMLNodes.\n"
                     );
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Retrieve the positions of the images
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     // The Vector-index location of all the images inside the article-body
                     int[] imagePosArr = InnerTagFind.all(article, "img", "src",
                         (String src) -> ! StrCmpr.startsWithXOR_CI(src.trim(), "data:"));
        
                     // A list of all the image-URL's that were extracted from the article-body
                     // using the integer-array acquired in the previous line.
                     Vector<URL> imageURLs = Links.resolveSRCs(article, imagePosArr, url);
        
                     if (skipArticlesWithoutPhotos && (imageURLs.size() == 0))
                     {
                         log.append(
                             BRED + "\tArticle content contained 0 HTML IMG elements" + RESET +
                             '\n'
                         );
        
                         ret.elementAt(outerCounter).add(DownloadResult.NO_IMAGES_FOUND);
                         continue;
                     }
        
                     log.append(
                         "\tArticle contains (" + YELLOW + imageURLs.size() + RESET + ") " +
                         "image TagNodes.\n"
                     );
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Check the banner-situation.  Count all images, and reduce that count by
                     // the number of "banner-images"
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     // IMPORTANT NOTE: THIS ISN'T ALWAYS USEFUL OR USABLE...  IT IS
                     // **SOMETIMES** USEFUL
        
                     int imageCount = imageURLs.size();
        
                     if (bannerAndAdFinder != null)
        
                         for (int pos : imagePosArr)
        
                             if (bannerAndAdFinder
                                 .test(((TagNode) article.elementAt(pos)).AV("src"))
                             )
                                 imageCount--;
        
                     if (skipArticlesWithoutPhotos && (imageCount == 0))
                     {
                         log.append(
                             BRED + "\tAll images inside article were banner images" +
                             RESET + '\n'
                         );
        
                         ret.elementAt(outerCounter)
                             .add(DownloadResult.NO_IMAGES_FOUND_ONLY_BANNERS);
        
                         continue;
                     }
        
                     if (bannerAndAdFinder != null)
        
                         log.append(
                             "\tArticle contains (" + YELLOW + imageCount + RESET + ") " +
                             "non-banner image TagNodes.\n"
                         );
        
        
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Write the results to the output file
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     Article articleResult = new Article(
                         url, title, (keepOriginalPageHTML ? page : null), article, imageURLs,
                         imagePosArr
                     );
        
                     // The article was successfully downloaded and parsed.  Send it to the
                     // "Receiver" and add DownloadResult to the return vector.
        
                     log.append(
                         GREEN + "ARTICLE LOADED." + RESET +
                         "  Sending to ScrapedArticleReceiver.\n"
                     );
        
                     articleReceiver.receive(articleResult, outerCounter, innerCounter);
                     ret.elementAt(outerCounter).add(DownloadResult.SUCCESS);
        
                     successCounter++;
        
                 }
                 catch (ReceiveException re)
                 {
                     // NOTE: If there was a "ReceiveException" then article-downloading must be
                     //       halted immediately.  A ReceiveException implies that the user did not
                     //       properly handle the downloaded Article, and the user's code would have
                     //       to be debugged.
        
                     log.append(
                         "There was an error when attempting to pass the downloaded article to " +
                         "the ArticleReceiver.  Unrecoverable.  Saving download state, and " +
                         "halting download.\n"
                     );
        
                     // Make sure to save the internal download state                        
                     if (pause != null)
                         pause.saveState(ret, outerCounter, innerCounter, successCounter);
        
                     // Make sure to stop the download process now.  If the article "Receiver"
                     // failed to save or store a received-article, there is NO POINT IN CONTINUING
                     // THE DOWNLOADER.
                     //
                     // NOTE: This will cause the method to exit with error; make sure to stop
                     //       the "MWT Thread".  Remember, this is just a simple "Monitor Thread"
                     //       that prevents a download from hanging.
        
                     HTMLPageMWT.shutdownMWTThreads();
        
                     throw re;
                 }
                 catch (IOException ioe)
                 {
                     // This exception occurs if writing the "Appendable" (the log) fails.  If this
                     // happens, download should halt immediately, and the internal-state should be
                     // saved to the 'pause' variable.
        
                     if (pause != null)
                         pause.saveState(ret, outerCounter, innerCounter, successCounter);
        
                     // Need to stop the download process.  IOException could ONLY BE the result
                     // of the "Appendable.append" method; none of the other statements here
                     // throw IOException.
                     //
                     // ALSO: If the "Appendable" never fails (and failure is very unlikely),
                     // this catch-statement will never actually execute.  However, if the
                     // Appendable did, in fact, fail to write, then downloading cannot continue.
                     //
                     // NOTE: This will cause the method to exit with error; make sure to stop
                     //       the HTMLPage's "MWT Thread" (it is a simple "Monitor Thread" that
                     //       can be used to prevent the download from hanging).
                     //       HOWEVER, the JVM will also 'hang' if this thread exits without
                     //       shutting down the monitor-thread!
        
                     HTMLPageMWT.shutdownMWTThreads();
        
                     throw ioe;
                 }
                 catch (Exception e)
                 {
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Handle "Unknown Exception" case.
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
          
                     log.append(
                         "There was an unknown Exception:\n" + EXCC.toString(e) +
                         "\nSkipping URL: " + url + '\n'
                     );
        
                     ret.elementAt(outerCounter).add(DownloadResult.UNKNOWN_EXCEPTION);
                 }
                 finally
                 {
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
                     // Write the current "READ STATE" information (two integers)
                     // *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
        
                     // This makes sure that the download-progress is not lost when large numbers
                     // of articles are being processed.  Restart the download, and the loop
                     // variables will automatically be initialized to where they were before the
                     // JVM exited.  (Pretty Useful)
        
                     if (pause != null)
                         pause.saveState(ret, outerCounter, innerCounter, successCounter);
                 }
         }
        
         log.append(
             BRED + STARS + RESET +
             "Traversing Site Completed.\n" +
             "Loaded a total of (" + successCounter + ") articles.\n"
         );
        
         // Returns the two-dimensional "Download Result" Vector
         // Make sure to stop the "Max Wait Time Threads"
        
         HTMLPageMWT.shutdownMWTThreads();
        
         return ret;