Article ID: 465488
Journal: Computer Law & Security Review
Published Year: 2013
Pages: 10
File Type: PDF
Abstract

The robots.txt protocol allows website owners to specify whether, and if so which, bots may access their sites. On the one hand, website owners may have good reason to fend off bots: bots may consume too much capacity, they may harvest data that are not suitable for presentation elsewhere on the web, or the owner may wish to disallow bots for reasons that lie in its relationship with the bot's user. On the other hand, search engines, aggregators and other users of bots may provide socially beneficial services based on the data collected by bots, that is, data that are freely available to anybody visiting the site manually. How should the law regulate disputes that arise in this context? Two legal regimes (trespass to chattels and unauthorised access) are examined. Based on the characteristics of the disputes at hand, a number of desirable characteristics for an ideal form of regulation are identified. When tested against these characteristics, both regimes are found to be lacking. A structure for a form of regulation is presented that allows the law to develop in a way that does more justice to the disputes at hand.
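To make the mechanism concrete: robots.txt is a plain-text file served at a site's root in which the owner lists, per bot, which paths may be crawled. The sketch below uses Python's standard-library parser against a hypothetical robots.txt (the bot names and paths are illustrative, not drawn from the article).

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: the owner shuts out one named bot
# entirely and keeps all other bots away from /private/.
robots_txt = """\
User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant bot checks can_fetch() before requesting a URL.
print(parser.can_fetch("BadBot", "/index.html"))     # False: banned outright
print(parser.can_fetch("GoodBot", "/index.html"))    # True: public page
print(parser.can_fetch("GoodBot", "/private/data"))  # False: restricted path
```

Note that compliance is voluntary: the file only expresses the owner's wishes, which is precisely why disputes with non-compliant bots end up before the law.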

Keywords
Related Topics
Physical Sciences and Engineering Computer Science Computer Science (General)
Authors