 

R scraping a table on https site

Tags:

r

I am trying to scrape the report log table from the website "https://www.heritageunits.com/Locomotive/Detail/NS8098" using the RCurl package with the code below. It pulls in elements from the page, but when I scroll through the 10 items in the list stored under "page", none of the elements include the table.

library("RCurl")
library("httr")   # GET() actually comes from httr, not RCurl

# Read page (cafile should point to a CA certificate bundle)
page <- GET(
  url = "https://heritageunits.com/Locomotive/Detail/NS8098",
  config(cainfo = cafile, ssl_verifyhost = FALSE)
)

I would also like to scrape the data from the tables on this page when you toggle to the reports from the previous days, but am not sure how to code this in R to select any previous report pages. Any help would be appreciated. Thanks.

mah271 asked Feb 02 '26

2 Answers

Occasionally I am able to find a JSON file in the source that you can hit directly, but I couldn't find one here. I went with RSelenium and had it click the Next button to cycle through the pages. This method is fragile, so pay attention when you run it: if the datatable is not fully loaded, it will duplicate the last page, so I used a small Sys.sleep to make sure it waited long enough. I would recommend checking for duplicate rows at the end to catch this. Again, it is fragile, but it works.

library(RSelenium)
library(XML)
library(foreach)


# Start Selenium server
checkForServer()
startServer()

remDr <- remoteDriver(
  remoteServerAddr = "localhost",
  port = 4444,
  browserName = "chrome"
)

remDr$open()

# Navigate to page
remDr$navigate("https://www.heritageunits.com/Locomotive/Detail/NS8098")

# Snag the html
outhtml <- remDr$findElement(using = 'xpath', "//*")
out <- outhtml$getElementAttribute("outerHTML")[[1]]

# Parse with XML (htmlParse comes from the XML package)
doc <- htmlParse(out, encoding = "UTF-8")

# get the last page so we can cycle through
PageNodes <- getNodeSet(doc, '//*[(@id = "history_paginate")]')
Pages <- sapply(X = PageNodes, FUN = xmlValue)
LastPage = as.numeric(gsub('Previous12345\\…(.*)Next', '\\1',Pages))


# loop through one click at a time
Locomotive <- foreach(i = 1:(LastPage-1), .combine = 'rbind', .verbose = TRUE) %do% {

  if(i == 1){

    readHTMLTable(doc)$history

  } else {

    nextpage <- remDr$findElement("css selector", '#history_next')
    nextpage$sendKeysToElement(list(key ="enter"))

    # Take it slow so it gets each page
    Sys.sleep(.50)

    outhtml <- remDr$findElement(using = 'xpath', "//*")
    out <- outhtml$getElementAttribute("outerHTML")[[1]]

    # Parse with XML
    doc <- htmlParse(out, encoding = "UTF-8")
    readHTMLTable(doc)$history
  }


}
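
The duplicate-row check suggested above can be a one-liner after the loop. A minimal sketch, using a small mock in place of the real scraped table (in the answer, `Locomotive` is the data frame returned by the foreach loop):

```r
# Mock of the scraped table; a row repeats when a page was grabbed
# before the datatable finished re-rendering
Locomotive <- data.frame(
  Date = c("2016-01-01", "2016-01-01", "2016-01-02"),
  City = c("Altoona", "Altoona", "Altoona"),
  stringsAsFactors = FALSE
)

n_before <- nrow(Locomotive)
Locomotive <- Locomotive[!duplicated(Locomotive), ]  # keep first copy of each repeat
cat("Removed", n_before - nrow(Locomotive), "duplicate rows\n")
# Removed 1 duplicate rows
```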
JackStat answered Feb 05 '26


Missed by a few minutes. I took the RSelenium snippet from another question and altered it to suit. I think this one's a little shorter, though. I didn't hit any issues with the page not loading.

## required packages
library(RSelenium)
library(rvest)
library(magrittr)
library(dplyr)


## start RSelenium
checkForServer()
startServer()
remDr <- remoteDriver()
remDr$open()

## send Selenium to the page
remDr$navigate("https://www.heritageunits.com/Locomotive/Detail/NS8098")

## get the page html
page_source <- remDr$getPageSource()

## parse it and extract the table, convert to data.frame
read_html(page_source[[1]]) %>% html_nodes("table") %>% html_table() %>% extract2(1)
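
This grabs only the current page of the table. To get the earlier report pages too, one could combine it with the Next-button clicking from the other answer. A sketch under the assumptions that the Next button keeps the `#history_next` id and that DataTables adds a `disabled` class to it on the last page (both may change; not tested against the live site):

```r
library(rvest)
library(magrittr)

all_pages <- list()
repeat {
  # parse whatever page the table is currently showing
  tbl <- read_html(remDr$getPageSource()[[1]]) %>%
    html_nodes("table") %>%
    html_table() %>%
    extract2(1)
  all_pages[[length(all_pages) + 1]] <- tbl

  # stop once the Next button is disabled (last page)
  nextbtn <- remDr$findElement("css selector", "#history_next")
  if (grepl("disabled", nextbtn$getElementAttribute("class")[[1]])) break

  nextbtn$clickElement()
  Sys.sleep(0.5)  # give the datatable time to re-render
}

reports <- do.call(rbind, all_pages)
```

As in the other answer, a duplicate-row check on `reports` afterwards is a cheap safeguard against a page being read before it re-rendered.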
Jonathan Carroll answered Feb 05 '26

