
Web Mining with Perl: Conclusion

It is common knowledge that the Internet is a great data source. It is also common knowledge that it is difficult to get the information you want in the format you need. No longer.

TABLE OF CONTENTS:
  1. Web Mining with Perl
  2. Accessing The Net (LWP)
  3. Cut Along The Table Lines (HTML::TableExtract)
  4. Learning From Links (HTML::LinkExtor)
  5. Checking For Sameness (String::CRC)
  6. Bringing It All Together
  7. Conclusion
By: Tommie Jones
March 05, 2002

The modules covered in this article are only a few of those useful for building a Web crawler. Other functionality is available as well:
  Text Processing: Lingua::Wordnet, Lingua::LinkParser
  XML: XML::Parser, XML::Xalan, XML::RSS
  Email: the MailTools bundle
  News Groups: News::Scan, News::Article
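The "checking for sameness" idea from earlier in this series keeps a checksum of every page already processed, so a re-fetched page with identical content can be skipped. As a hedged illustration, the sketch below uses the core Digest::MD5 module in place of String::CRC (which may not be installed everywhere); the logic is the same either way: a repeated checksum means content you have already seen.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);   # core module, standing in for String::CRC

# Checksums of every page already processed.
my %seen;

# Return true the first time a given chunk of content is seen,
# false on any later occurrence of identical content.
sub is_new_content {
    my ($content) = @_;
    my $sum = md5_hex($content);
    return 0 if $seen{$sum}++;  # already crawled this exact content
    return 1;
}

print is_new_content("<html>page one</html>") ? "new\n" : "dup\n";
print is_new_content("<html>page one</html>") ? "new\n" : "dup\n";
```

The same pattern works with any digest function; the checksum is cheaper to store and compare than the full page text.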

The flexibility of Perl and its rich set of available modules make it a perfect tool for developing web crawlers.
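The crawl loop this series describes reduces to: fetch a page (LWP), extract its links (HTML::LinkExtor), and enqueue any not yet visited. The sketch below shows only the link-extraction step, run against a literal HTML string so it needs no network access; it assumes HTML::LinkExtor and URI are available (both ship alongside LWP).

```perl
#!/usr/bin/perl
use strict;
use warnings;
use HTML::LinkExtor;   # part of the HTML-Parser distribution
use URI;

# Collect the href of every <a> tag in a page, resolving
# relative links against the page's own URL.
sub extract_links {
    my ($html, $base) = @_;
    my @links;
    my $extor = HTML::LinkExtor->new(sub {
        my ($tag, %attr) = @_;
        push @links, URI->new_abs($attr{href}, $base)->as_string
            if $tag eq 'a' && defined $attr{href};
    });
    $extor->parse($html);
    $extor->eof;
    return @links;
}

my $html = '<a href="/docs">Docs</a> <a href="http://example.org/">Out</a>';
print "$_\n" for extract_links($html, 'http://example.com/');
```

In a real crawler the `$html` string would come from an LWP::UserAgent request, and each absolute link would be pushed onto the work queue only if its content checksum had not been seen before.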

 
 