
Deep Web And Travel Industry: Exclusive Interview With Marcus P. Zillman

Today we are interviewing Marcus P. Zillman, international Internet expert, author, keynote speaker, corporate consultant and one of the most renowned experts on the Deep Web, information retrieval and the Internet.

Marcus is currently the Executive Director of the Virtual Private Library. He is a member of the American Society for Information Science and Technology, and is also actively involved as an internationally known speaker and author.

Marcus will talk about the Deep Web, the future of search engines and their relationship with the travel industry.


To start our interview: what is your definition of the Deep Web?

The simple definition of the Deep Web is the area of the World Wide Web that is not searched consistently and is therefore not readily available to the searcher and researcher. It may be files, databases, .ps documents et al. that the major search engines are not programmed, or lack the algorithms, to search.

Is it possible to estimate the real size of the Web? What is the ratio between the Invisible and the Visible Web?

I have seen numerous studies and discussions on this that state the Deep Web represents 500 billion to 1 trillion pages. I would lean to the higher side, as content is being developed at a tremendous rate with blogs, wikis, etc. The traditional web searched by Google and others normally covers in the area of 20 to 30 billion pages, depending on who you speak to. Now you can see a very large ratio between the traditional web and the Deep Web… on the high side, 30 billion versus 1 trillion.
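To put those figures in perspective, here is a quick back-of-the-envelope calculation; it is a sketch only, using the estimates quoted in the answer rather than measured values:

```python
# Rough ratio between the surface web and the Deep Web, using the
# figures quoted above (estimates, not measured values).
surface_web_pages = 30e9   # ~30 billion pages indexed by major engines
deep_web_pages = 1e12      # ~1 trillion pages on the high-side estimate

ratio = deep_web_pages / surface_web_pages
print(f"Deep Web is roughly {ratio:.0f}x larger than the surface web")
# -> Deep Web is roughly 33x larger than the surface web
```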

The travel industry is heavily affected by consumer-generated media. In your opinion, how relevant is it for a travel operator to understand and monitor the Deep Web?

It is extremely important for the travel operator not only to understand the Deep Web but also to know how to work within it to discover new knowledge, gain competitive intelligence and learn what the consumer is thinking.

I was a guest speaker at a national travel operator conference in 1995 and told them that the Internet is coming… the Internet is coming… and soon consumers would be purchasing tickets directly online! Needless to say, I was not well received, but of course I was invited back two years later to show them what they should be doing… interesting times then and now!

Do you think that search engine accuracy will improve in the near future and that the gap between indexed files and invisible ones will narrow?

Very good question! I would hopefully say yes, but in the real virtual world the ratio is more likely to stay the same, as new types of files and knowledge bases are constantly being developed, not to mention all the video and multimedia file types currently being added to the Internet that fall into the Deep Web.

Do you think that the new Universal Search algorithm will significantly improve Google’s performance in terms of its capacity to investigate the Deep Web?

As the founder of BotSpot and a long-time bot builder, the ultimate goal was and will always be a Universal Search Algorithm! If this algorithm is created and all the files on the Internet are receptive to and designed to accommodate the Universal Search Bot, then it will improve Google’s performance in investigating the Deep Web… I think we need to concentrate on teaching folks how to search and how to write search queries first 😉 … many times search results come down to Garbage In, Garbage Out…
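As an aside on writing sharper queries, the sketch below (not part of Zillman’s answer) composes a query string from standard search operators such as exact-phrase quotes, filetype: and site:; the helper function and the example terms are illustrative only.

```python
# A minimal sketch of building more precise search queries with standard
# operators (exact-phrase quotes, filetype:, site:, term exclusion).
# The operators are standard; the example phrase is a hypothetical placeholder.

def build_query(phrase, filetype=None, site=None, exclude=None):
    """Compose a search query string from a phrase plus optional operators."""
    parts = [f'"{phrase}"']                   # exact-phrase match
    if filetype:
        parts.append(f"filetype:{filetype}")  # restrict to a file type, e.g. pdf or ps
    if site:
        parts.append(f"site:{site}")          # restrict to one domain
    if exclude:
        parts.append(f"-{exclude}")           # drop a noisy term
    return " ".join(parts)

# Example: look for report documents that rarely surface in casual searches.
print(build_query("adventure travel market report", filetype="pdf"))
# -> "adventure travel market report" filetype:pdf
```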

Do you think that a truly semantic search engine as powerful as Google will ever appear on the market?

Google has gained significant market share since its inception at Stanford in 1996. I have seen and continue to investigate a number of semantic search engines, as this area is of high interest to me. In fact, I have created a special section on semantic web resources in my Deep Web Research Subject Tracer Information Blog showing how this area is growing and will be part of the future of searching. Other areas include RDF, which allows for significantly better tagging to help identify the content.

One of your most successful keynotes is “Searching the Internet in 2007 and beyond”: how do you see the information-gathering process evolving in the near future? Will search engines remain the main doors to information retrieval, or will they be replaced by other tools (RSS, e-mail alerts, etc.)?

Searching in the future will be a truly exciting adventure. We will be seeing far more exacting answers with less endless scrolling! We will be obtaining information that is close to 100% relevant, allowing us to compare the resulting finds to obtain what we truly desire. In my just-released publication Current Awareness Monitors, Alerts and Information Traps for 2008 I show all the monitors, alerts and information traps that are available now on the web. The future will have even more of these to keep folks current in their business, profession or special interest! Every travel operator needs to be monitoring their business name, personal name and competitors’ names, not to mention other special-interest areas… this should be done now and be part of your ongoing market intelligence program!
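As a purely illustrative example of the kind of current awareness “information trap” described above, the following sketch polls a single RSS feed for mentions of a business name, a personal name and a competitor. The feed URL and watch terms are hypothetical placeholders, and it assumes the third-party feedparser package.

```python
# A minimal "information trap" sketch: poll an RSS/Atom feed and flag
# entries that mention names you care about. Feed URL and watch list
# are placeholders; requires the feedparser package.
import feedparser

WATCH_TERMS = ["Acme Travel", "Jane Doe", "Rival Tours"]   # business, personal, competitor names
FEED_URL = "https://example.com/travel-news.rss"           # any feed worth monitoring

def scan_feed(url, terms):
    """Return (title, link) pairs for entries whose title or summary mentions a watched term."""
    feed = feedparser.parse(url)
    hits = []
    for entry in feed.entries:
        text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
        if any(term.lower() in text for term in terms):
            hits.append((entry.get('title', ''), entry.get('link', '')))
    return hits

if __name__ == "__main__":
    for title, link in scan_feed(FEED_URL, WATCH_TERMS):
        print(f"Mention found: {title} -> {link}")
```

In practice a script like this would be scheduled to run periodically and extended to cover multiple feeds and alert services as part of an ongoing market intelligence program.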

How will users manage the information overload of the Web?

The key to managing information overload on the Web is EDUCATION! I have listed many resources to help overcome it, and they are freely available from my Manage Information Overload presentation resources page.

What are your top three pieces of advice for travel operators who want to monitor their online reputation?

You must understand the Deep Web, you must set up bots to monitor your name and your competition, and you must have current awareness information traps set to bring you the latest information in your field or niche area of interest/profession!

In this post, “Google Aquires Internet”, the author imagines a possible future (2017) for Google, the Internet, Microsoft and the other search engines: do you think it is an insightful article or a joke?

Articles about the future are always insightful, but if one knows and understands the original creation of the Internet (1969) and how the Internet Engineering Task Force was set up… then one would understand that no one person, no one company, no one country will ever be able to acquire or even control the Internet! We truly live in exciting times, where the world has become a global economy and the Internet has ignited a capacity for global collaboration never before seen in all of mankind’s history!

 
