Daniel Lemire's blog


Picking a web page at random on the Web

12 thoughts on “Picking a web page at random on the Web”

  1. dan g says:

    One idea may be to query a search engine for a single letter instead of a list of words, and then choose a random URL from a random page of the entire result set.

    That may avoid the issue of having the search engine assign a “relevancy” score, as well as the bias towards sites containing specific words.
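
    A minimal sketch of that idea, assuming a hypothetical search_results(query, page) helper standing in for whatever search API you can access; note that the relevance ranking within each result page would still bias the sample:

    ```python
    import random
    import string

    def search_results(query, page):
        """Hypothetical helper: return the list of result URLs for `query`
        on result page `page` of some search engine. Not a real API."""
        raise NotImplementedError

    def random_page_via_single_letter(total_pages=100):
        # Query a single random letter rather than a dictionary word,
        # to reduce the vocabulary bias of word-based queries.
        letter = random.choice(string.ascii_lowercase)
        # Pick a random result page, then a random URL on that page.
        results = search_results(letter, page=random.randrange(total_pages))
        return random.choice(results)
    ```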

  2. Chris Brew says:

    See also this paper:

    http://www9.org/w9cdrom/88/88.html

    which includes methods for balancing out the tendency of random walks to reach well-connected pages more often than poorly-connected ones.

  3. David says:

    Generate a random bit.ly URL. The code at the end is usually 4 to 6 letters and digits.

    For instance, this page is: http://bit.ly/KSZ44

    But it’s probably biased toward geek topics.
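
    A minimal sketch of this guess-and-check approach, assuming unused codes come back as HTTP errors (bit.ly’s actual behaviour for bad codes may differ):

    ```python
    import random
    import string
    import urllib.request
    import urllib.error

    ALPHABET = string.ascii_letters + string.digits  # bit.ly codes are alphanumeric

    def random_bitly_url():
        # Random candidate short URL of 4 to 6 characters.
        code = "".join(random.choice(ALPHABET) for _ in range(random.randint(4, 6)))
        return "http://bit.ly/" + code

    def resolve(url):
        # Follow the redirect; return the target URL, or None if the code is unused.
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return response.geturl()
        except urllib.error.URLError:
            return None

    target = None
    while target is None:  # keep guessing until a code resolves to a real page
        target = resolve(random_bitly_url())
    print(target)
    ```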

  4. Thomas Deselaers says:

    For Facebook users it is quite easy to pick a random profile (a sketch of this loop follows the steps):

    1. pick a random number $RANDOMNUMBER

    2. check if there is somebody at
    http://www.facebook.com/home.php#/profile.php?id=$RANDOMNUMBER

    3. if yes: done;
    if not, go to 1
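
    A minimal sketch of this rejection-sampling loop; the upper bound on the id space is made up, and the assumption that missing profiles answer with an HTTP error may not hold (Facebook may instead serve a login or error page with status 200):

    ```python
    import random
    import urllib.request
    import urllib.error

    MAX_ID = 10**9  # assumed upper bound on the profile id space

    def random_profile():
        # Rejection sampling: guess ids until one belongs to a real profile.
        while True:
            candidate = random.randrange(1, MAX_ID)          # step 1
            url = "http://www.facebook.com/profile.php?id=%d" % candidate
            try:
                urllib.request.urlopen(url, timeout=10)      # step 2
                return url                                   # step 3: done
            except urllib.error.URLError:
                continue                                     # step 3: go to 1
    ```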

  5. Thomas Deselaers says:

    David’s suggestion is not really random, as you will only find pages that have been bit.ly’d, which is probably mainly true for relatively new sites.

  6. thrill says:

    Rather than picking common words, I have done this by picking one to three random words and then using the “lucky” button on Google (via a direct URL call). This *appears* to give many more unusual pages better visibility – and it’s amusing to wonder how certain random combinations end up redirecting to particular pages.
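
    A minimal sketch of building such a direct URL; the btnI parameter historically triggered Google’s “I’m Feeling Lucky” redirect, though Google may ignore it or interpose a confirmation page today, and the word-list path is just an example:

    ```python
    import random
    import urllib.parse

    # Any word list will do; /usr/share/dict/words is a common Unix location.
    WORDS = [w.strip() for w in open("/usr/share/dict/words")]

    def lucky_url():
        # One to three random words, passed to the "I'm Feeling Lucky" endpoint.
        query = " ".join(random.sample(WORDS, random.randint(1, 3)))
        return "https://www.google.com/search?" + urllib.parse.urlencode(
            {"q": query, "btnI": "I"})

    print(lucky_url())  # open in a browser to land on the "lucky" page
    ```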

  7. Anonymous says:

    If you assume that what Google, Bing, or any other search engine indexes is representative of the web, then you can use MCMC sampling methods to produce a random sample of documents from a search engine. See http://portal.acm.org/citation.cfm?id=1411509.1411514

  8. David Buttler says:

    I think that a random walk can be a bad idea if implemented naively. If memory serves, the web has, at a high level, three types of pages:
    1) pages with only out links
    2) pages with only in links
    3) pages with both

    This concept extends not just to pages but to entire subgraphs: once you enter some parts of the graph, you can never get out, trapping your random walker. However, without a very large index, it is difficult to identify these parts of the graph.

    Another alternative is to buy a copy of the web: see ClueWeb09 for a billion web pages for less than $1K.

  9. Siamak F says:

    The idea of using the Metropolis-Hastings algorithm came to mind.

    Say your web graph is your Markov chain graph, and you want to sample randomly. One idea is to start from an initial seed and run your Metropolis-Hastings algorithm with a burn-in period; after the burn-in it gives you a state sampled at random from around your seed.

    A similar idea can be used to target the uniform distribution 🙂
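
    A minimal sketch of such a walk targeting the uniform distribution, assuming an undirected (or symmetrized) link graph given as adjacency lists; the toy graph below stands in for a real crawl:

    ```python
    import random

    def metropolis_hastings_walk(graph, seed, burn_in=10000):
        """Walk whose stationary distribution is uniform over the nodes.

        Proposing a uniform neighbour v of the current page u and accepting
        with probability min(1, deg(u)/deg(v)) cancels the usual random-walk
        bias towards well-connected pages.
        """
        current = seed
        for _ in range(burn_in):
            proposal = random.choice(graph[current])
            if random.random() < len(graph[current]) / len(graph[proposal]):
                current = proposal
        return current

    # Toy symmetric link graph: each edge appears in both adjacency lists.
    toy = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
    print(metropolis_hastings_walk(toy, seed="a", burn_in=1000))
    ```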

  10. Daniel Gayo says:

    Hi,

    You should check this Yahoo! service: http://random.yahoo.com/bin/ryl

    As far as I know, it produces random pages taken from their index. I used it in the past and, although it certainly shows a bias towards .com pages, it is pretty convenient.

    Best, Dani

  11. Seb says:

    The Yahoo! generator is pretty cool. It seems to only show top-level pages, though.

    Given that many web pages are actually generated through parameters (see http://www.epinions.com/Refrigerators for just one example), the number of different pages a site can offer rises with the number of parameter combinations. How does one define “uniform sampling” in that context?

  12. Seb says: