One idea might be to query a search engine for a single letter instead of a list of words, and then choose a random URL from a random page of the entire result set.
That may avoid the problem of having the search engine assign a “relevancy” score, as well as its bias toward sites containing specific words.
See also this paper, http://www9.org/w9cdrom/88/88.html, which includes methods for balancing out the tendency of random walks to reach well-connected pages more often than poorly connected ones.
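A minimal sketch of the single-letter idea, assuming a caller-supplied search(query, page) helper that stands in for whatever search API is used (real APIs differ and limit how deep you can page into results):

```python
import random
import string

def random_url_from_search(search, max_pages=100):
    """Pick a random URL via a search engine.

    `search(query, page)` is a hypothetical caller-supplied function that
    returns the list of result URLs on the given result page.
    """
    # Query for a single letter instead of a list of words, to avoid
    # biasing the sample toward sites that contain specific words.
    letter = random.choice(string.ascii_lowercase)
    # Pick a random result page, then a random URL from that page.
    results = search(letter, random.randint(1, max_pages))
    return random.choice(results) if results else None
```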
Generate a random bit.ly URL. The short code at the end is usually 4 to 6 letters and digits.
For instance, this page is http://bit.ly/KSZ44.
But it’s probably biased toward geek topics…
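A rough sketch of that approach (assumptions on my part: codes are case-sensitive letters and digits, and an existing short link answers with an HTTP 3xx redirect; bit.ly's actual behaviour may differ):

```python
import random
import string

import requests  # third-party: pip install requests

ALPHABET = string.ascii_letters + string.digits  # assumed code alphabet

def random_bitly_url():
    """Build a random short URL with a 4- to 6-character code."""
    code = "".join(random.choice(ALPHABET) for _ in range(random.randint(4, 6)))
    return "http://bit.ly/" + code

def resolve(url):
    """Return the redirect target if the short link exists, else None."""
    resp = requests.head(url, allow_redirects=False, timeout=10)
    if 300 <= resp.status_code < 400:
        return resp.headers.get("Location")
    return None

# Keep guessing until we hit an existing short link.
target = None
while target is None:
    target = resolve(random_bitly_url())
print(target)
```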
For Facebook users it is quite easy to pick randomly (a rough sketch follows the steps):
1. pick a random number $RANDOMNUMBER
2. check if there is somebody at http://www.facebook.com/home.php#/profile.php?id=$RANDOMNUMBER
3. if yes: done; if not, go to 1
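A loose sketch of that loop. Assumptions: I use the plain http://www.facebook.com/profile.php?id=… form rather than the #-fragment URL above (the fragment is never sent to the server), I cap the id at an arbitrary bound, and I treat a non-200 response as "nobody there"; Facebook's real behaviour differs and generally requires being logged in:

```python
import random

import requests  # third-party: pip install requests

MAX_ID = 10**9  # arbitrary assumed upper bound on profile ids

def random_profile_url():
    while True:
        # 1. pick a random number
        uid = random.randint(1, MAX_ID)
        # 2. check if there is somebody at that profile URL
        url = "http://www.facebook.com/profile.php?id=%d" % uid
        resp = requests.get(url, timeout=10)
        # 3. if yes: done; if not, try again
        if resp.status_code == 200:
            return url

print(random_profile_url())
```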
David’s suggestion is not really random: you will only be able to find pages that have been bit.ly’d, and that is probably mostly true of relatively new sites.
thrill says:
Rather than picking common words, I have done this by picking one to three random words and then using the “lucky” button on Google (via a direct URL call). This *appears* to give many more unusual pages better visibility, and it is amusing to wonder how certain random combinations end up redirecting to particular pages.
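A small sketch of the direct-URL trick. Assumptions: a word list lives at /usr/share/dict/words, and the btnI parameter still triggers Google's "I'm Feeling Lucky" redirect; both may vary:

```python
import random
import urllib.parse

# Assumed word-list location (varies by system).
with open("/usr/share/dict/words") as f:
    WORDS = [w.strip() for w in f if w.strip()]

def lucky_url():
    # One to three random words, joined into a single query.
    query = " ".join(random.sample(WORDS, random.randint(1, 3)))
    # btnI is the historical "I'm Feeling Lucky" parameter; assumption:
    # Google still honours it and redirects straight to the top hit.
    return "http://www.google.com/search?" + urllib.parse.urlencode(
        {"q": query, "btnI": "I"}
    )

print(lucky_url())  # follow this URL to land on the "lucky" page
```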
Anonymous says:
If you assume that what Google / Bing / any other search engine indexes is representative of the web, then you can use MCMC sampling methods to produce a random sample of documents from a search engine. See http://portal.acm.org/citation.cfm?id=1411509.1411514
David Buttler says:
I think that a random walk can be a bad idea if implemented naively. If memory serves, the web has, at a high level, three types of pages:
1) pages with only out-links
2) pages with only in-links
3) pages with both
This concept extends beyond single pages to entire subgraphs: once you get into some parts of the graph you can never get out, which traps your random walker. However, without a very large index, it would be difficult to identify these parts of the graph (a toy illustration is sketched after this comment).
Another alternative is to buy a copy of the web: see ClueWeb09 for about 1B web pages for less than $1K.
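A toy illustration of the trapping effect, on a hypothetical five-page graph in which pages D and E link only to each other, so any walk that reaches them never leaves:

```python
import random

# Hypothetical toy web graph: D and E form a subgraph with no links back
# to the rest, so a naive random walk eventually gets stuck there.
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["E"],
    "E": ["D"],
}

def random_walk(start, steps):
    page = start
    for _ in range(steps):
        page = random.choice(GRAPH[page])
    return page

# After enough steps the walker is almost surely inside the {D, E} trap.
print([random_walk("A", 1000) for _ in range(10)])
```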
Siamak F says:
The idea of using a Metropolis-Hastings algorithm came to my mind.
Say your web graph is your Markov chain graph, and you want to sample randomly. One idea is to start with an initial seed and run your Metropolis-Hastings algorithm with a burn-in period; it will give you a state randomly and normally distributed around your seed.
A similar idea can be used for a uniform distribution 🙂 (a sketch is below).
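A minimal sketch of that idea for the uniform case. Assumptions: a hypothetical out_links(url) helper supplies a page's links, and the link structure is treated as if it were symmetric; on the real, directed web this only holds approximately:

```python
import random

def out_links(url):
    """Hypothetical helper: fetch `url` and return the URLs it links to."""
    raise NotImplementedError

def mh_uniform_sample(seed_url, burn_in=10_000):
    """Metropolis-Hastings walk whose target is the uniform distribution
    over pages, assuming the link graph behaves like an undirected graph."""
    current = seed_url
    for _ in range(burn_in):
        neighbours = out_links(current)
        if not neighbours:
            break  # dead end; in practice, restart from another seed
        proposal = random.choice(neighbours)
        proposal_neighbours = out_links(proposal)
        if not proposal_neighbours:
            continue  # skip pages the walk could never leave
        # Accept with probability min(1, deg(current) / deg(proposal)):
        # this cancels the walk's bias toward highly linked pages.
        if random.random() < min(1.0, len(neighbours) / len(proposal_neighbours)):
            current = proposal
    return current
```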
Hi,
You should check out this Yahoo! service: http://random.yahoo.com/bin/ryl
As far as I know, it produces random pages taken from their index. I used it in the past and, although it certainly shows a bias towards .com pages, it is pretty convenient.
Best, Dani
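For what it's worth, the service just answers with a redirect, so grabbing the random URL programmatically looks roughly like this (assuming the service is still online and still behaves that way):

```python
import requests  # third-party: pip install requests

# The Yahoo! random-link service redirects to a random page from its
# index (assumption: it is still running and responds with a redirect).
resp = requests.get("http://random.yahoo.com/bin/ryl",
                    allow_redirects=False, timeout=10)
print(resp.status_code, resp.headers.get("Location"))
```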
Seb says:
The Yahoo! generator is pretty cool. It seems to only show top-level pages, though.
Given that many web pages are actually generated from parameters (see http://www.epinions.com/Refrigerators for just one example), the number of different pages a site can offer grows with the number of parameter combinations. How does one define “uniform sampling” in that context?
A few good-looking hits stem from this query: http://www.google.com/search?hl=en&q=how+to+sample+uniform+from+the+web&aq=f&oq=&aqi=