How do I parse this JavaScript from Oddshark.com with BeautifulSoup?
I'm working on a little web-scraping program to get some data and help me make some bets.
Ultimately, I want to parse the "Trends" section under each game of the current week on pages like this one: https://www.oddsshark.com/nfl/arizona-kansas-city-odds-november-11-2018-971332
My current algorithm:
- GET https://www.oddsshark.com/nfl/scores
- Parse the webpage for the little "vs" button which holds links to all the games
- Parse for the Trends
Here's how I started:
from bs4 import BeautifulSoup
import requests
url = "https://www.oddsshark.com/nfl/scores"
result = requests.get("https://www.oddsshark.com/nfl/scores")
print ("Status: ", result.status_code)
content = result.content
soup = BeautifulSoup(content, 'html.parser')
print (soup)
When I look at the output, I don't really see any of those links. Is it because a lot of the site is rendered with JavaScript?
Any pointers on the code/algorithm appreciated!
python web-scraping beautifulsoup
asked Nov 10 at 14:22 by user1224478
3 Answers
You can use the internal API this site uses to get all the links, then iterate over them to get the trends info, which is embedded in a script tag with id gc-data:
import requests
import json
from bs4 import BeautifulSoup

# Call the internal ticker API; the referer header is needed to avoid an
# unauthorized response.
r = requests.get(
    'https://io.oddsshark.com/ticker/nfl',
    headers = {
        'referer': 'https://www.oddsshark.com/nfl/scores'
    }
)

# Build (date, away team, home team, matchup URL) tuples for every matchup.
links = [
    (
        t["event_date"],
        t["away_name"],
        t["home_name"],
        "https://www.oddsshark.com{}".format(t["matchup_link"])
    )
    for t in r.json()['matchups']
    if t["type"] == "matchup"
]

# Fetch each matchup page and pull the trends JSON out of the gc-data script tag.
for t in links:
    print("{} - {} vs {} => {}".format(t[0], t[1], t[2], t[3]))
    r = requests.get(t[3])
    soup = BeautifulSoup(r.content, "lxml")
    trends = [
        json.loads(v.text)
        for v in soup.findAll('script', {"type": "application/json", "id": "gc-data"})
    ]
    print(trends[0]["oddsshark_gamecenter"]["trends"])
    print("#########################################")
answered Nov 10 at 16:56 by Bertrand Martel

Wow, that's awesome. I started using Google's Puppeteer because I read it would be more appropriate for what I wanted to do. How did you know about the API Oddsshark uses? Is that what the io.oddsshark.com/ticker/nfl URL is? Because if I try to go to that webpage, it says unauthorized.
– user1224478
Nov 10 at 21:39

@user1224478 If you check the network tab you will see that this URL is called. It seems it needs the referer header to get through the unauthorized message: curl "https://io.oddsshark.com/ticker/nfl" -H "referer: https://www.oddsshark.com/nfl/scores"
– Bertrand Martel
Nov 10 at 21:59

Do you mind explaining what exactly I should be looking for? I am on Chrome looking at the Network tab, but there's a lot of stuff going on. Should I see what you mentioned as I click on the link to the matchup page?
– user1224478
Nov 11 at 7:05

Try filtering XHR in the Chrome dev console.
– Bertrand Martel
Nov 11 at 13:21
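A minimal sketch of the referer check discussed in the comments, done from Python instead of curl. The exact status codes are assumptions based on the "unauthorized" message mentioned above, not guaranteed:

# Compare the ticker endpoint's response with and without the referer header.
import requests

url = "https://io.oddsshark.com/ticker/nfl"

bare = requests.get(url)
with_referer = requests.get(url, headers={"referer": "https://www.oddsshark.com/nfl/scores"})

print("without referer:", bare.status_code)          # expected: unauthorized (e.g. 401/403)
print("with referer:   ", with_referer.status_code)  # expected: 200 with the matchups JSON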
The reason you don't see those links is that they're not in the response that requests receives. This is very likely for one of two reasons:
- The server recognizes that you are trying to scrape the site with a script and sends you different content. Usually this happens because of the User-Agent set by requests.
- The content is added dynamically via JavaScript that runs in the browser.
You could probably render this content using a headless browser in your Python script and end up with the same content you see when you visit the site with Chrome etc. Per (1), it might also be necessary to experiment with the User-Agent header in your request.
answered Nov 10 at 14:39 by Matt Morgan
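For the headless-browser route, here is a minimal sketch using Selenium (one possible tool; the answer does not name a specific one). It assumes Selenium and a matching chromedriver are installed, and the "-odds-" link filter is only a guess at the matchup URL pattern based on the example link in the question:

# Render the scores page in a headless Chrome, then parse the resulting HTML.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from bs4 import BeautifulSoup

options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
driver.get("https://www.oddsshark.com/nfl/scores")
html = driver.page_source  # HTML after the page's JavaScript has run
driver.quit()

soup = BeautifulSoup(html, "html.parser")
# "-odds-" is an assumed pattern for matchup links; adjust after inspecting the page.
links = [a["href"] for a in soup.find_all("a", href=True) if "-odds-" in a["href"]]
print(links)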
The data is loaded into the trends table via JavaScript, but it is actually included in a script tag inside the HTML that you receive. You can parse it like this:
import requests
import json
from bs4 import BeautifulSoup

# A browser-like User-Agent so the server returns the normal page.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0'
}
response = requests.get('https://www.oddsshark.com/nfl/arizona-kansas-city-odds-november-11-2018-971332', headers=headers)
soup = BeautifulSoup(response.text, "lxml")

# The gc-data script tag holds the game-center JSON, including the trends.
data = json.loads(soup.find("script", {'id': 'gc-data'}).text)
print(data['oddsshark_gamecenter']['trends'])
Outputs:
{'local': {'title': 'Trends'}, 'away': [{'value': 'Arizona is 4-1-1
ATS in its last 6 games '}, {'value': 'Arizona is 2-6 SU in its last 8
games '}, {'value': "The total has gone UNDER in 8 of Arizona's last
12 games "}, {'value': 'Arizona is 3-7-1 ATS in its last 11 games on
the road'}, {'value': 'Arizona is 2-4 SU in its last 6 games on the
road'}...
answered Nov 10 at 16:42 by drec4s

This script tag is found on the matchup page URL itself, correct? Not from the scores page?
– user1224478
Nov 10 at 21:45
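Building on the structure shown in the output above, a small self-contained sketch for listing the trend lines per side. Only "away" appears in the truncated output; a mirroring "home" key is an assumption:

import requests
import json
from bs4 import BeautifulSoup

# Fetch one matchup page (URL from the question) and print its trend lines.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0'}
url = 'https://www.oddsshark.com/nfl/arizona-kansas-city-odds-november-11-2018-971332'
soup = BeautifulSoup(requests.get(url, headers=headers).text, "lxml")
data = json.loads(soup.find("script", {'id': 'gc-data'}).text)

trends = data['oddsshark_gamecenter']['trends']
for side in ('away', 'home'):          # 'home' assumed to mirror 'away'
    print(side.upper())
    for entry in trends.get(side, []): # falls back to an empty list if a side is missing
        print(' -', entry['value'].strip())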