DEV Community

Web Scraper & Data Extraction with Python | Upwork Series #1

Rashid on January 02, 2020

This post was cross-published with OnePublish. Welcome to the first post of the Upwork Series. In this series, we are going to work on real-world applicati...
Narendra Kumar Vadapalli

Just a couple of minor points, which might make the code look clean and neat.

If you define a helper method like the following

def get_match_from_info(pattern):
    return re.search(pattern, information).group(1).strip()

then each of the repeated re.search lines can become

operating = get_match_from_info('Operating Status:(.*)Out')

Of course, you need to declare information as a global variable.
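A self-contained sketch of that refactor, using a hypothetical information string in place of the scraped page text (passing the text as a parameter is one way to avoid the global):

```python
import re

# Hypothetical stand-in for the scraped page text
information = "Operating Status:  AUTHORIZED FOR Property  Out of Service Date:  None"

def get_match_from_info(pattern, text=information):
    """Return the first capture group of `pattern` in `text`, stripped of whitespace."""
    match = re.search(pattern, text)
    return match.group(1).strip() if match else None

operating = get_match_from_info('Operating Status:(.*)Out')
print(operating)  # AUTHORIZED FOR Property
```

Returning None on a missed match also avoids the AttributeError mentioned further down in the thread.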


Also, you may want to look at pandas: pandas.pydata.org/
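For the pandas route, a minimal sketch (the field names and values here are hypothetical stand-ins for the scraped records): collect each carrier's fields as a dict, then write the whole batch to one CSV at the end instead of line by line.

```python
import pandas as pd

# Hypothetical records standing in for the scraped carrier data
rows = [
    {"usdot": "12345", "legal_name": "ACME TRUCKING", "drivers": "10"},
    {"usdot": "67890", "legal_name": "ROADRUNNER LLC", "drivers": "4"},
]

df = pd.DataFrame(rows)
df.to_csv("carriers.csv", index=False)  # one file, one header row
print(df.shape)  # (2, 3)
```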

Ben Halpern

This is a really interesting concept for a series!

Rashid

Thanks 🙌🚀

rpopovwex • Edited

For some reason, this code gave me an AttributeError when a DoT number was not found. I figured out that this was due to bs.find('center') not finding the correct field (since it doesn't exist on the page for a non-existent or outdated DoT number). I solved the problem by changing this:

except AttributeError:
      pass

to

except AttributeError:
    continue

so that instead of doing nothing (pass) I'd skip to the next DoT number. I also had to move the whole block of code starting with "information" one tab to the right so that it's only executed when the try statement succeeds. This way only valid DoT numbers are crawled and saved. Hope this helps!

Here's how the code looks in the final form:

import re
from urllib.request import Request, urlopen
from bs4 import BeautifulSoup

def crawl_data(url):
    req = Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    html = urlopen(req).read()
    bs = BeautifulSoup(html, 'html.parser')
    bold_texts = bs.find_all('b')
    for b in bold_texts:
        try:
            date = re.search('The information below reflects the content of the FMCSA management information systems as of(.*).', b.get_text(strip=True, separator='  ')).group(1).strip()
            if len(date) > 11:
                date = date.split(".",1)[0]
            print(date)
        except AttributeError:
            continue

        information = bs.find('center').get_text(strip=True, separator='  ')

        operating = re.search('Operating Status:(.*)Out', information).group(1).strip()
        legal_name = re.search('Legal Name:(.*)DBA', information).group(1).strip()
        physical_address = re.search('Physical Address:(.*)Phone', information).group(1).strip()
        mailing_address = re.search('Mailing Address:(.*)USDOT', information).group(1).strip()
        usdot_address = re.search('USDOT Number:(.*)State Carrier ID Number', information).group(1).strip()
        power_units = re.search('Power Units:(.*)Drivers', information).group(1).strip()
        drivers = re.search('Drivers:(.*)MCS-150 Form Date', information).group(1).strip()

        write_csv(date, operating, legal_name, physical_address, mailing_address, usdot_address, power_units, drivers)
rpopovwex

Also, it'd be convenient to add some sort of progress readout stating which DoT number is being crawled at the moment and how many are left, as well as a short message when a DoT number is not found.
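A minimal sketch of such a progress readout, assuming dot_numbers is the list loaded from the spreadsheet (the values below are hypothetical):

```python
dot_numbers = ["12345", "67890", "99999"]  # hypothetical DoT numbers

messages = []
for i, dot in enumerate(dot_numbers, start=1):
    # Report the current DoT number, position in the list, and how many remain
    msg = f"Crawling DoT {dot} ({i}/{len(dot_numbers)}, {len(dot_numbers) - i} left)"
    messages.append(msg)
    print(msg)
    # crawl_data(...) for this DoT number would be called here
```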

Juan Carlos

Good post; with examples and explanations, this can be interesting.

I invite you to try the new lib dev.to/juancarlospaco/faster-than-...
😃

Theanimeblog

I'm looking to hire a dev who can create a web scraper that outputs to .csv format and runs daily at 9am EST. It can save to a Google Sheet once run.

Each day a new PDF is published, e.g.: li-public.fmcsa.dot.gov//lihtml/rp...

Only the digits change, corresponding to the daily date change.

I need to pull the following per row:

MC Number
Company Name
Name
Address
Phone

Email me to discuss budget; I can pay through PayPal G&S quickly. Looking for help immediately.

info@dnxint.com is my email

Dmitrii Pashutskii

Hi! Great article, thanks for posting.
Just one question: how does this relate to Upwork? Just curious as a former freelancer on the platform.

Ilham Ferry Pratama Yudha

Nice post. Thanks for making this one. The explanation is very clear. Happy to read it :)

Cetive

Two issues:

1) The code is creating a separate CSV file for each line.
2) My dots.xls file has 100 DoT numbers, but it only searches 2 and ends the process.