There's been a lot of hype lately about the Web Bots and I'm really surprised there's almost no information available about them. The Web Bot project was developed in the late 1990s mainly to predict stock market ups and downs (Web bot and stock markets). How do they do it? It's really simple in fact, and it's a very nice twist on what search engines do. Web bots crawl the web at regular intervals, the same way Google crawls it, to catch new and existing web sites and detect relevant keywords. The difference is in what they do with them: while search engines are just "indexing" the Web and building a directory, Web bots look for trends in the keyword relations they find. They search for keywords "standing out" in a web document, but also take into consideration the content preceding and following the keyword. This data is then analyzed by a linguistic tool to determine its meaning, if there is any.
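To make that description a bit more concrete, here's a minimal sketch of what the crawl-and-extract step could look like. The actual Web Bot code has never been published, so the tracked keywords, the context window size and the "standing out" rate below are purely illustrative assumptions, not the project's real method.

# A hypothetical sketch of the crawl-and-extract step described above.
# The real Web Bot code was never published; the tracked keywords, the
# context window size and the "standing out" rate are made-up assumptions.
import re
from collections import Counter

TRACKED_KEYWORDS = {"market", "crash", "collapse"}   # hypothetical watch list
CONTEXT_WINDOW = 5                                   # words kept on each side of a hit

def extract_contexts(page_text, keywords=TRACKED_KEYWORDS, window=CONTEXT_WINDOW):
    """Return (keyword, words before, words after) for every keyword hit in one page."""
    words = re.findall(r"[a-z']+", page_text.lower())
    hits = []
    for i, word in enumerate(words):
        if word in keywords:
            hits.append((word, words[max(0, i - window):i], words[i + 1:i + 1 + window]))
    return hits

def standout_keywords(pages, keywords=TRACKED_KEYWORDS, min_rate=0.002):
    """Flag keywords whose overall frequency across the crawled pages exceeds a threshold."""
    counts, total = Counter(), 0
    for text in pages:
        words = re.findall(r"[a-z']+", text.lower())
        total += len(words)
        counts.update(w for w in words if w in keywords)
    return [k for k, c in counts.items() if total and c / total >= min_rate]

The hard part is the "linguistic tool" step: something downstream still has to decide whether those surrounding words actually mean anything, and that is exactly where the interpretation problem comes in.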
So does it work, and are Web Bots reliable? Well, it's kind of hit and miss. Sometimes you'll see things predicted correctly and sometimes you won't. I think the project is a really good initiative and can lead to great things, but at the moment I'm more under the impression that the results get interpreted the same way we interpret Nostradamus's quatrains. What I mean is that what the Web Bots are pulling out of the internet while crawling is not that clear, and when an event happens a couple of days later, we find a way to relate the data to it.
Being in the computer engineering domain, I think I can see where the Web Bots will succeed and where they will fail. There are fields where I believe the Web Bots can predict things and fields where they can't. What are these fields? Essentially, anything man-made could be predicted in some way, and anything man has no control over can't be, for the plain and simple reason that the Web Bots crawl the Internet for data and the Internet is man-made. The only data that can be collected is data written by people, governments, companies, etc. I don't see how you can predict a natural disaster or anything like that by simply crawling the Web. The only thing you can get by crawling is facts or opinions, nothing else. The only way I can see them predicting natural disasters, or anything not man-related, is if the Web Bots actually crawl 3,000 blogs/websites written by specialists of a certain domain and those specialists are all pointing towards a similar conclusion (a rough sketch of that idea follows below).
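Here is a rough, purely hypothetical sketch of that consensus idea: flag a topic only when a large fraction of independent specialist sources mention it within the same time window. The source names, topics and threshold are invented for the example and have nothing to do with the real project.

# Hypothetical "specialist consensus" check: flag a topic only when a large
# share of independent expert sources mention it in the same period.
from collections import defaultdict

def consensus_topics(posts, min_fraction=0.5):
    """posts: iterable of (source_name, set of topic keywords) for one time window.
    Returns topics mentioned by at least min_fraction of the distinct sources."""
    sources_per_topic = defaultdict(set)
    all_sources = set()
    for source, topics in posts:
        all_sources.add(source)
        for topic in topics:
            sources_per_topic[topic].add(source)
    if not all_sources:
        return []
    return [t for t, s in sources_per_topic.items()
            if len(s) / len(all_sources) >= min_fraction]

# Example: 3 of 4 (invented) seismology blogs mention "seismic swarm" the same week.
example = [
    ("blog_a", {"seismic swarm", "aftershock"}),
    ("blog_b", {"seismic swarm"}),
    ("blog_c", {"volcano"}),
    ("blog_d", {"seismic swarm"}),
]
print(consensus_topics(example))  # ['seismic swarm']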
What about 2012 (Read my article on 2012) and the Web Bots? As I said, I don't see how a computer can figure out what's going to happen in 2012 simply by visiting websites published by real people. More Web Bot data pointing towards 2012 just means more and more people are publishing stuff about 2012 and the end of the world. Remember, the only thing they can crawl is the internet, and what you find on the internet was created by real people, not God. They will surely get a strong correlation between 2012 and the end of the world; there are tons of websites talking about it.
So, can Web Bots predict stuff? Yes, I believe they can, and it's a really nice piece of technology. I'm pretty sure preventing terrorist attacks can be done using Web Bots, along with predicting anything involving human interaction. Remember, they crawl the web, which is written by humans, so they can only predict what humans are able to predict, just in a shorter time frame! It's easy to fall into the "it can predict everything" trap, so let's say it's a really nice technology which can help us predict things we wouldn't normally see coming. There's a lot of future in that field and it needs to be pushed a lot further. A good project, with good potential, but be careful with what people say it predicted!