Q&A: Cuil co-founder Tom Costello

We have a spider which goes out and visits all websites to collect the data. In general, if you're going to build a search engine, you have to go out to each website and download each page.

There's a protocol we follow called robots.txt, which tells you which pages not to download and also how often you are allowed to come back to the site. In general, the default, unless the site states otherwise, is that you only go back every 30 seconds.
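For readers curious what that check looks like in practice, here is a minimal sketch using Python's standard-library robots.txt parser. The site URL is a placeholder, and the user-agent string is illustrative (Cuil's crawler identified itself as Twiceler):

```python
# Minimal sketch of the robots.txt check a polite crawler performs
# before fetching pages. The URL below is a hypothetical placeholder.
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt

agent = "Twiceler"  # illustrative; Cuil's crawler used this name

# Is this page off limits for our crawler?
allowed = rp.can_fetch(agent, "https://example.com/some/page.html")

# How long must we wait between requests? Fall back to a default
# (e.g. the 30 seconds described above) if the site doesn't say.
delay = rp.crawl_delay(agent) or 30

print(allowed, delay)
```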

Because we were indexing a lot more of the web, we were going to a lot of places where other spiders, such as Yahoo's or Google's, hadn't gone before. And it can be very disconcerting if you're a webmaster and you suddenly see this automated thing coming back every thirty seconds. Because Google or Yahoo has never crawled you, people are very surprised and ask why this is happening. Again, it's how search engines work: in order to index you, we have to crawl you.

Especially before we launched, people were asking why we were crawling so many pages. Who could possibly want to crawl all of these pages if not a major search engine? One of the reasons we get more complaints is that we have crawled more of the web than other people.

I think most of the complaints are resolved very quickly. A lot of the cases are people who don't quite understand how robots.txt works, or there's some other issue that is usually sorted out very quickly.

There have been a lot of people who have suggested that their sites were brought down by your bot. Is that true?

In general, I don't know of any case where our bot has crashed somebody's site. What we do is fetch web pages. Because we're crawling, we come back every thirty seconds to get a new page; that's the standard delay.

If your site crashes and you look at your logs, we probably came to your site in the previous thirty seconds. So it's very easy to look at your logs and say, 'Oh, the only one using the site at the time it crashed was their bot.' That doesn't mean our bot crashed your site; it means we happened to be crawling at the same time.
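A rough sketch of the log check he describes might look like the following, assuming combined-format web server access logs. The filename, timestamp format, and crash time are hypothetical placeholders:

```python
# Find every user agent that hit the site in the 30 seconds before a
# crash. Assumes Apache/nginx combined log format; the log filename
# and crash timestamp below are hypothetical.
import re
from datetime import datetime, timedelta

crash = datetime.strptime("21/Jul/2008:14:32:10", "%d/%b/%Y:%H:%M:%S")
window_start = crash - timedelta(seconds=30)

# Combined format: ... [21/Jul/2008:14:32:01 -0700] "GET ..." ... "user-agent"
line_re = re.compile(r'\[([^ ]+) [^\]]+\].*"([^"]*)"$')

with open("access.log") as logfile:
    for line in logfile:
        match = line_re.search(line)
        if not match:
            continue
        when = datetime.strptime(match.group(1), "%d/%b/%Y:%H:%M:%S")
        if window_start <= when <= crash:
            print(when, match.group(2))  # every visitor in the window
```

As the answer notes, finding a crawler's user agent in that window shows correlation, not causation: a bot on a 30-second cycle will appear in almost any 30-second slice of the logs.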

We don't have any magic ability to crash people's websites; all we do is fetch pages.

You said you're quite happy with the amount of traffic you have now. What sort of market share do you think you need to pass the tipping point and take Cuil to the mainstream?

I think the first thing we're trying to do is deliver a better experience. Search has really been in stasis for at least the last five years. There's been one way of presenting results, one way of interacting, and that's really prevented creativity. There's a huge set of things you can do with a search engine: ways of visualising things, ways of presenting things, ways of summarising. We're really excited to get into that space and begin to do things differently.

I think that's the one thing that's a little bit sad about search: because it takes so much investment and so much work to build a search engine, you really haven't seen a lot of people come in, add creatively, and do things differently. That's what we're really excited about.

After the troubles at launch, you still have a very small market share. What's it going to take to get people using Cuil?

I think search comes down to doing a good job for people. Right now, I think about one in eight times people go to Google, they don't find what they're looking for, and they'll go off and try another search engine. When people do come and try us, we have to do a good job. We have to find what they were looking for when other search engines couldn't.