Het binnenste buiten
Liber amicorum on the occasion of the emeritus status of prof. dr. Aernout H.J. Schmidt, professor of Recht en Informatica (Law and Informatics) at Leiden
© 2010, eLaw@Leiden and the authors. ISBN 978-90-815196-1-8
Apart from the exceptions provided in or pursuant to the Dutch Copyright Act of 1912, no part of this publication may be reproduced, stored in an automated retrieval system, or made public, in any form or by any means, whether electronic, mechanical, by photocopying, recording or otherwise, without the prior written permission of the authors.
Insofar as the making of reprographic copies from this publication is permitted under Article 16h of the Copyright Act 1912, the statutory fees due must be paid to Stichting Reprorecht (Postbus 3060, 2130 KB Hoofddorp, www.reprorecht.nl). For the inclusion of (a) portion(s) of this publication in anthologies, readers and other compilations (Art. 16 Copyright Act 1912), please contact Stichting PRO (Stichting Publicatie- en Reproductierechten Organisatie, Postbus 3060, 2130 KB Hoofddorp).
Part I – Grensgevallen 13
Paul Cliteur, Why I had not read Karen Armstrong (but I have now) 15
Wilfred Dolfsma, Government Failure – 4 types 27
Richard Gill, Lies, damned lies, and legal truths 39
Radim Polčák, Quem ad Finem: To the Limits of Modernity 51
Krzysztof Siewicz, Free Software and the Law 65
Part II – Informatierecht 77
Marga Groothuis, Informatievrijheid en digitalisering 79
Wouter Hins, Publieke media op internet: zorgplicht en concurrentievervalsing
Cyril van der Net, De nabuurrechtelijke aanspraak op een billijke vergoeding voor privé-kopiëren naar internationaal recht 109
Leonie Siemerink, File sharing bedreigt handhaving auteursrecht 123
Part III – Methode 135
Annemarie Beunen, De geesteswetenschappen, rechtsgeleerdheid en kunstgeschiedenis vergeleken 137
Rob van Esch, Schijn verdwijnt waar ICT verschijnt 153
Jaap Hage, Heeft ICT-recht een eigen methode? 163
Jaap van den Herik, Van registratie naar verwerking 179
Gerben Wierda, Limits and Dimensions 189
Part IV – Praktijk 203
Martin Apistola, Towards a Preliminary Knowledge Management Reasoning System to Improve Consistency of Sentencing 205
Hans Fokker, E-discovery 221
Ronald van den Hoogen, E-Justice: nieuwe kansen voor onderzoek naar ICT en recht 233
Wim Voermans, Computers kunnen er niks van! 243
Part V – Privacy 253
John Borking, Assessing investments mitigating privacy risks 255
Mireille Hildebrandt, Recht en markt: met falen en opstaan 275
Corien Prins, Digital Diversity: Protecting Identities Instead of Individual Data 291
Bart Schermer, Privacy and Singularity: little ground for optimism? 305
Gerrit-Jan Zwenne, Over persoonsgegevens en IP-adressen, en de toekomst van privacywetgeving
Part VI – Transparantie 343
Dariusz Adamski, Change We Can Believe In or Politics As Usual? 345
Hans Franken, Is het elektronisch patiëntendossier een bedreiging voor de rechtsstaat? 367
Laurens Mommers, Toegang tot juridische informatie als grondrecht 375
Ignace Snellen, Towards transparency as a basic human right 389
Kees Stuurman, Public access to standards: some fundamental issues and recent developments 405
This liber amicorum contains contributions from a large number of people with whom Aernout Schmidt worked during his career at Leiden University. Among them are also quite a few doctors who received their degrees thanks in part to Aernout’s supervision. Because Aernout has never been one to be pinned down, the diversity of his interests and of his circle of friends is reflected in the range of the contributions.
In classifying the contributions we got no further than a Wittgensteinian family resemblance: that of the importance of knowing the inside and bringing it out. The inside of technology, always approached by Aernout with healthy distrust and great curiosity. But also the inside of human beings, not always understood by him, but usually accepted ‘at face value’.
Aernout has had various ‘heroes’, or at least so we assume from the books he has read, cited and discussed over the years. From Hofstadter and Geertz to Fuller and Lessig, he never stops at the boundaries of the law, presumably because they do not exist. His mission seems to have consisted in ‘injecting’ important scholarly ideas into a discipline in which crowd barriers are often erected around positive law.
Long before faith in the progress of artificial intelligence was tempered in wider circles, Aernout left the path of ‘hard’ legal informatics and traded it for the field in which the potential progress would be enormous: the application of ICT in the legal professions. And when the adage ‘What applies offline should also apply online’ was still widely swallowed whole, Aernout already took an exceptionally critical view of it.
It will therefore not surprise the reader that the contributions to this book do not let themselves be captured easily in firm themes. All the more fitting, then, that Aernout can laugh heartily (not to say: roaringly) at a classification of animals from a book by the writer Borges: animals that belong to the emperor, stray dogs, animals that are included in this classification, those that have just broken a pitcher, and those that from a distance look like flies.1
The slight unease that this absurdist typology evokes also took hold of us in our attempts to divide the contributions to this book into themes. We would therefore like to emphasize that those themes mainly serve to create the impression that we had a grip on the contents of the book. Unlike new media, books do not really offer the possibility of placing one contribution under several themes, something that would have been particularly useful in precisely this case.
So please do not take offence at the themes (borderline cases, information law, method, practice, privacy, transparency), for they mean nothing at all. They are above all a way to divide this book neatly into parts, as is proper.
In the part Grensgevallen (Borderline Cases) there are contributions on the work of Karen Armstrong (Cliteur, Why I had not read Karen Armstrong (but I have now)), on government failure as a variant of market failure (Dolfsma, Government Failure – 4 types), on judges and their (lack of) knowledge of statistics (Gill, Lies, damned lies, and legal truths), on the intellectual modernism of early twentieth-century Vienna (Polčák, Quem ad Finem: To the Limits of Modernity), and on conceiving legislation on the model of open-source software (Siewicz, Free Software and the Law; on collaborative law-making and adjudication).
The part Informatierecht (Information Law) contains contributions on the current state of the catalogue of fundamental rights in the Netherlands in the field of, among other things, freedom of expression (Groothuis, Informatievrijheid en digitalisering. Ontwikkelingen in jurisprudentie en regelgeving), on the relationship between public and private media and their services on the internet (Hins, Publieke media op internet: zorgplicht en concurrentievervalsing), on the remuneration for private copying in an international legal framework (Van der Net, De nabuurrechtelijke aanspraak op een billijke vergoeding voor privé-kopiëren naar internationaal recht), and on the uneasy relationship between file sharing and copyright (Siemerink, File sharing bedreigt handhaving auteursrecht).
The part Methode (Method) contains a number of contributions on the method of (ICT) legal scholarship. They concern a comparison of the method of legal scholarship with that of several other disciplines (Beunen, De geesteswetenschappen, rechtsgeleerdheid en kunstgeschiedenis vergeleken), the role of ICT in avoiding false appearances (Van Esch, Schijn verdwijnt waar ICT verschijnt), an analysis of the (supposed) method of ICT law (Hage, Heeft ICT-recht een eigen methode?), computer-science methods for information exchange and ranking (Van den Herik, Van registratie naar verwerking), and a comparison between economics and legal science (Wierda, Limits and Dimensions).
The part Praktijk (Practice) contains contributions on the use of ICT in legal practice: a knowledge-management reasoning system to improve the consistency of sentencing (Apistola, Towards a Preliminary Knowledge Management Reasoning System to Improve Consistency of Sentencing), electronic discovery (Fokker, E-discovery), new opportunities for research into ICT and law (Van den Hoogen, E-Justice: nieuwe kansen voor onderzoek naar ICT en recht), and the role of computers in the work of legislative lawyers (Voermans, Computers kunnen er niks van!).
The part Privacy contains contributions in the field of, among other things, the economic value of personal data. The contributions concern the calculation of returns on privacy-protecting measures (Borking, Assessing investments mitigating privacy risks), the commodification of privacy (Hildebrandt, Recht en markt: met falen en opstaan), the shift from data to identities as the object of protection (Prins, Digital Diversity: Protecting Identities Instead of Individual Data), privacy protection on the road to singularity (Schermer, Privacy and Singularity: little ground for optimism?), and improving the regulation of the processing of IP addresses (Zwenne, Over persoonsgegevens en IP-adressen, en de toekomst van privacywetgeving).
The part Transparantie (Transparency), finally, contains contributions in the field of rights to the transparency of public administration and access to the law. The contributions deal with ICT initiatives concerning the transparency of American public administration (Adamski, Change We Can Believe In or Politics As Usual?), the question of how the national electronic patient record fares in the light of Aernout’s inaugural lecture (Franken, Is het elektronisch patiëntendossier een bedreiging voor de rechtsstaat?), the question of whether access to the law can be construed as a fundamental right (Mommers, Toegang tot juridische informatie als grondrecht), transparency as a fundamental right (Snellen, Towards transparency as a basic human right), and the accessibility of standards referred to in legislation (Stuurman, Public access to standards: some fundamental issues and recent developments).
First and foremost we wish the brand-new emeritus much pleasure in reading the articles in this liber amicorum. We naturally wish the other readers the same. And here we would like to express our gratitude to the authors for their successful attempts to bring the inside out. The editing was in the hands of a number of Aernout’s colleagues at eLaw@Leiden, Centre for Law in the Information Society.
Aernout Schmidt is what I consider to be a ‘real professor’ and a great academic scholar. For the greater part of my academic career at the law faculty of the University of Leiden I worked together with Aernout on the same research project (Social cohesion, multiculturalism and globalization) and in the same department (now: the Institute for the Interdisciplinary Study of the Law). Aernout always struck me as an impressive scholar for two reasons.
First, Aernout was never afraid to start reading very difficult books, apparently without feeling intimidated. I recall Aernout walking around with Douglas Hofstadter’s Gödel, Escher, Bach (1979) as something he would devour during lunch break.1 A second reason is that Aernout was not only eager to read difficult books but was also willing to discuss the content of those books during discussion sessions with other members of the staff and the most gifted students. More than once he also tried to organize those sessions.
I always had a vague excuse not to attend those sessions, also because the books Aernout selected were too difficult for me or too time-consuming to read. I remember an exhortation by Aernout to read and subsequently discuss Philip Bobbitt’s The Shield of Achilles: War, Peace and the Course of History (2002). I bought the book (I have all the books Aernout selected on my bookshelves), noticed that it was more than 900 pages long and did not read it.2 Another book I bought and subsequently did not read on Aernout’s advice was Lawrence Lessig’s Code and Other Laws of Cyberspace (1999). (This time there was no reasonable excuse; the book is 250 pages long.)
As far as I remember there was one book we actually did read together with a small and rapidly dwindling group: the English translation of Ferdinand Tönnies’s Gemeinschaft und Gesellschaft (1887). All things considered this is not a very impressive record in the light of Aernout’s expectations of his direct scholarly ambiance.
Yet on the threshold I hope to rehabilitate myself a bit. I will try to do this by making some remarks on a book that I likewise did not read at the time Aernout suggested it, but which I have read afterwards. That book is Karen Armstrong’s The Battle for God: Fundamentalism in Judaism, Christianity and Islam (2000).3 This time my reluctance to read the book was not inspired by ordinary laziness but by a kind of ideological resistance. I ‘knew’ (without having read the book) that this book would be a bad book. Now that I have read the book I know why. Here are my reasons.

■ Paul Cliteur is full professor at the Leiden Law Faculty.
1 Hofstadter 1979. 2 Bobbitt 2002.
All religions equally beautiful
Karen Armstrong (1944-) is one of the most widely read authors on religion nowadays. She rose to prominence in 1993 with A History of God, a study in comparative religion and the key figures that have shaped the different religions.4 Armstrong’s central thesis is that all religions are equally beautiful and that it is only fundamentalist approaches (equally dispersed over all religions) that are responsible for the violence connected with religion.5 Armstrong is also convinced that all religions basically teach the same.6 They all subscribe to the Golden Rule: do unto others what you would have others do unto you.
This view of religion is all very well, of course, as a profession of faith, but as a scholarly orientation it leaves much to be desired, because it cannot explain violence. Are there really no moorings for violence in the religious traditions themselves? Is violence something that comes from outside the traditions, completely alien to those traditions themselves? Why does it occur more often in some traditions than in others? Why more under some historical conditions than under others?
Armstrong’s books affirm people in their most sacred beliefs but are, from a scholarly point of view, more problematic. In particular after the September 11, 2001 attacks she was much in demand on the lecture circuit, where she pleaded for inter-faith dialogue. The contradictions between religion and modern life are explained by focusing on the misguided interpretations that fundamentalists give to their religious tradition. This attitude is clearly manifested in her book on the interpretation of the Bible: The Bible (2007).
Her starting point in The Bible is sacred books, Holy Writ or Scripture. In nearly all the major faiths, she says, people have regarded certain texts as sacred and ontologically different from other documents. “They have invested those writings with the weight of their highest aspirations, most extravagant hopes and deepest fears, and mysteriously the texts have given them something in return.”7
Yet Armstrong has also noticed that scriptural authority has acquired a dubious flavour. “Today scripture has a bad name”, she writes. “Terrorists
3 Armstrong 2000. 4 Armstrong 1993. 5 Armstrong 2000.
6 See on this: Antes 2007, who contends that the thesis that all religions say the same cannot be maintained empirically.
use the Qur’an to justify atrocities, and some argue that the violence of their scripture makes Muslims chronically aggressive. Christians campaign against the teaching of evolutionary theory because it contradicts the biblical creation story. Jews argue that because God promised Canaan (modern Israel) to the descendants of Abraham, oppressive policies against the Palestinians are legitimate.”8 There has been a scriptural revival that has intruded
into public life, Armstrong concludes. And it is especially against those political manifestations of religion that secularist criticism is directed. “Secularist opponents of religion claim that scripture breeds violence, sectarianism and intolerance; that it prevents people from thinking for themselves, and encourages delusion. If religion preaches compassion, why is there so much hatred in sacred texts?”9
That is a good question indeed. And it undoubtedly makes readers curious about her answer. Alas, that answer is not very satisfying. Basically, it boils down to the contention that it is all a matter of interpretation. What she tries to do in her book on the Bible is to make two claims. First, she makes an analytical point: that allegorical reading of scripture is perfectly legitimate. Second, there is a historical claim. Armstrong argues that an exclusively literal interpretation of the Bible is a “recent phenomenon”. Armstrong thinks, for instance, that until the 19th century very few people imagined that the first chapter of Genesis was a factual account of the origins of life. For centuries, Jews and Christians relished “highly allegorical and inventive exegesis, insisting that a wholly literal reading of the Bible was neither possible nor desirable.”10
I think both claims are dubious, but for different reasons. As a historical argument it would be wrong to claim that literalist interpretations were completely unknown in the past and that literalism is an invention of modern fundamentalists.11 The tendency to take Scripture as literally true was much more widely dispersed in earlier times than in our own. Historical research can point this out. But that is not the point I want to focus on. I want to concentrate on the analytical claim. What is more important is that Armstrong also seems to be an adherent of semantic relativism. How she reconciles this with her claim of the sacredness of the texts remains unclear. All the same, the reason why she believes this is crystal clear: we do not have to take the problematic passages in Scripture seriously, because we can simply focus on what we like. “Interpretation” always provides the way out. What Armstrong wants to demonstrate in her book is that from the first, biblical authors felt free to “revise texts they had inherited and give them entirely different meaning”.12 Later exegetes held up the Bible as the template for the problems of their time. This somewhat relaxed way of dealing with Scripture seems very attractive to Armstrong. About the attitude of those “exegetes” with regard to Scripture she writes:

9 Ibid., p. 3. 10 Ibid., p. 3.
Sometimes they allowed it to shape their world-view but they also felt free to change it and make it speak to contemporary conditions. They were not usually interested in discovering the original meaning of a biblical passage. The Bible “proved” that it was holy because people continually discovered fresh ways to interpret it and found that this difficult, ancient set of documents cast light on situations that their authors could never have imagined. Revelation was an ongoing process; it has not been confined to a distant theophany on Mount Sinai; exegetes continued to make the Word of God audible in each generation.13
This is a revealing passage that sums it all up. But there are some pertinent questions to be posed with regard to this approach. Let us look more closely at the passage quoted above.
Those exegetes that Armstrong admires for their relaxed attitude towards Scripture “sometimes” allowed it to shape their world-view but “they also felt free to change it” and “make it speak to contemporary conditions”.
An obvious question then is: “In what situations did those exegetes choose to let their world-view be changed by Scripture, and in what situations did they resist and decide, vice versa, to change Scripture?” The dilemma seems to be this: either we are changed by Scripture or Scripture is changed by us. The traditional approach seems to be that we have to be changed by Scripture, but according to Armstrong and the liberal exegetes from the past she admires so much, it is also perfectly legitimate to change Scripture. But how do those “exegetes from the past” decide one way or the other? Armstrong does not give us an answer to that pertinent question, at least not explicitly.
A second question that can be posed with regard to the passage quoted from Armstrong’s book is: what exactly were those liberal exegetes doing? One thing is sure: according to Armstrong, “they were not usually interested in discovering the original meaning of a biblical passage”. But if that was not the focus of their interest, what was? From the traditional perspective it is clear what the meaning of Scripture is. Scripture informs us about the divine. It tells us what a personal, omnipotent and perfectly good God wants us to do. In Scripture God “reveals” his will (“Thy will be done, as in heaven, so on earth.” Matt. 6:10).
A respectful intercourse with the divine would suppose, so it seems, that the exegete scrupulously tries to ascertain what the meaning of Scripture is, in order to know what God has in mind for mankind. But apparently the exegetes Armstrong favours are not interested in this type of question. What, then, is their focus of interest?
Is Scripture “sacred”?
Armstrong also introduces a completely new, and we may say revolutionary, way to look at the “sacredness” of Scripture. In the traditional view Scripture is sacred because it informs us about the eternal plans God has for the world. “In its specifically theological usage, the term (Scripture) serves to identify the written and authoritative word of God rather than any specific textual content.”14 Scripture presents us with moral values and rules that have eternal significance. The Bible was held in high esteem because there we were provided with moral injunctions that are not the whims of fashion, the human, all too human, ideas about right and wrong, but absolute and universal guidelines. The whole idea of texts that are sacred and, in the words of Armstrong, “ontologically different”, is based on this presumption.
But what does Armstrong do? She tells us that the Bible “proved” that it was holy because people could continually discover “fresh ways to interpret it”. So what makes the Bible holy is not its fixed meaning, but exactly the opposite: it has no meaning at all, because meaning constantly changes over time. It is only logical that from here she continues with another disconcerting vision, viz. that in this conception of scripture “revelation was an ongoing process”. So what has been revealed today as higher wisdom can be obsolete tomorrow. The Ten Commandments, revealed on Mount Sinai and one of the most authoritative parts of the Bible because written or at least dictated by God himself,15 can in the future be abolished by new insights, and this should not convince us of the relative character of the text of the Bible but of its sacredness. The sacredness of the text is exactly that everything is possible. Or, to put it somewhat more instrumentally, Scripture is sacred because it can be used by the exegete for whatever he or she wishes.
Armstrong is very supportive of the exegesis by the rabbis called midrash. This is derived from the verb darash, meaning “to investigate” or “to seek”.16 The meaning of a text is not “self-evident”, so Armstrong tells us.
“The exegete had to go in search of it, because every time a Jew confronted the Word of God in scripture, it signified something different. Scripture was inexhaustible.”17
These are revealing words. That the meaning of a text is not self-evident is not very surprising, to be sure, but that every time a text is read a new meaning comes into existence is nothing short of magic. This would make – if it were true – all human communication impossible.
Also the pretension that Scripture is “inexhaustible” is a bit strange. How does Armstrong know? History is still not at an end, so theoretically there may be a point in time at which Scripture really is “exhausted”. Or is the “inexhaustibility of Scripture” all that the religion of Armstrong and the rabbis she quotes amounts to? I mean: they apparently do not believe in an eternal God who reveals his ideas to mankind but in a kind of magic phenomenon called “Scripture” that makes it possible for them to “get revealed” to them exactly what they want to hear each time this “Scripture” is consulted. Is the essence of their faith perhaps a kind of fetishism: they believe that one specific book that has in the past sent out new messages to different people all the time will continue to do so till the end of times? If that is the case, fine, but people who believe such a thing would be the creators of a whole new religion, so it seems.
Ordinary books and sacred books
What Armstrong seems to do is equate the Bible with what we call a classic work. This already appeared in her first book to gain a worldwide audience: A History of God (1993). It is here that she writes about the founder of Islam and states: “It is not surprising that Muhammad found the revelations such an enormous strain: not only was he working through an entirely new political solution for his people, but he was composing one of the great spiritual and literary classics of all time.”18 As one commentator has rightly
remarked: “This is high praise for a fellow composer perhaps, but nonetheless reductionist in its familiar repetition of the old cornerstone of Jewish and Christian excising God from the authorship of the Holy Qur’an.”19 The philosopher John Haldane (1954-) refers to some modern liberal theologians and adds that if the first Christians had taken their view it is hard to believe that Christianity would have survived the lifetimes of the apostles.20 That remark is relevant for the work of Armstrong as well, so it seems.
What do we understand by a “classic”?21 Or, as Armstrong says, “a great spiritual and literary classic”? The word “classic” refers to a specific quality of a book, play or work of art that does not antiquate. Common examples are the plays of Shakespeare or the Iliad and Odyssey of Homer.22 Every new
generation can read those books or see those plays performed on stage and see new things in them, discover new shades of meaning. That makes those plays interesting, but does it make them “holy” or “sacred”?

18 Armstrong 1993, p. 164. 19 Mason 1995, p. 481. 20 Haldane 2003, p. 28.
As can be expected, Armstrong’s vision on biblical interpretation is intimately connected with her vision on what religion should be and on what she considers a perversion of religion. Religion, so Armstrong tells us, “is a practical discipline that teaches us to discover new capacities of mind and heart”.23 She also calls religion “a skill” that requires perseverance, hard work and discipline. “Some people will be better at it than others, some appallingly inept, and others will miss the point entirely.”24
It is clear that with such a definition of religion any criticism of religion can easily be discarded as “missing the point” or “appallingly inept”. Actually it immunizes religion entirely from any type of critique. Any justified criticism of religion is always criticism of something that aspires to be religion but should be carefully distinguished from religion: fundamentalism. “Fundamentalism” Armstrong sees as the “perversion of religion”. It is something that we encounter in all religions. She writes:
The Western media often give the impression that the embattled and occasionally violent form of religiosity known as “fundamentalism” is a purely Islamic phenomenon. This is not the case. Fundamentalism is a global fact and has surfaced in every major faith in response to the problems of our modernity. There is fundamentalist Judaism, fundamentalist Christianity, fundamentalist Hinduism, fundamentalist Buddhism, fundamentalist Sikhism and even fundamentalist Confucianism.25
Then the question is: how do we discern true religion from fundamentalism? In her book on fundamentalism she develops an analysis in which fundamentalism is seen as the attempt in various religions to turn mythos into logos. That means that religion in its pristine state is, according to Armstrong, mythos.26 But what serious believer could go along with that? This definition of “fundamentalism” would make the overwhelming majority of serious believers “fundamentalists”. Only those who, like Armstrong, would be prepared to let their religious convictions evaporate into myths would be considered believers in the positive sense of the word – the rest are all dubbed “fundamentalists”.
This view, often presented as “moderate”, is in fact extreme and something no serious believer could subscribe to. But also from a non-religious analytical or scholarly point of view it is not very satisfying. It misses what believers consider important in their religion.
23 Armstrong 2009, p. 1. 24 Ibid., p. 4.
25 Armstrong 2002, p. 164.
Liberal believers usually react to criticism along the lines outlined above with an accusation. They say: “You argue just like the fundamentalists. Why shouldn’t it be allowed to modernize religious traditions? You want to relegate religious texts to the dustbin of history, don’t you?” But these are all ad hominem arguments, that is to say: fallacies and evasions of the issue. That someone argues “just like the fundamentalists” is irrelevant. The question is: are the fundamentalists right with regard to the issue under consideration? And it may well be that the fundamentalists are wrong insofar as they are prepared to use violence, wrong in that they do not want to discuss their presuppositions, but right in the sense that their vision on what distinguishes sacred texts from classic books is much more convincing than the vision expounded by liberal believers. It is also an unfounded allegation that critics of the liberal approach to belief are motivated by the urge not to modernize religious traditions. The issue at stake is: how are we going to do that? The critics of religious traditions I have in mind advocate the open acceptance of moral autonomy. The vision on scriptural interpretation of moral secularists is that we accept or reject biblical values and norms on the basis of moral criteria, not religious criteria. The yardstick with which we measure the moral value of scriptural passages is itself not a principle derived from Scripture. And it is a principle of scholarly and moral integrity to make that clear.
The British philosopher Richard Robinson (1902-1996) had some very pertinent advice for the interpreters of Scripture. “To interpret the gospels correctly you must read them with what may be called interpreter’s piety, that is, the will to receive into your mind the exact meaning the author intended, however strange or repellent or boring it may turn out to be.”27
It requires no elaborate argumentation to see that this is the exact opposite of what Karen Armstrong requires an interpreter to do. What Robinson calls “interpreter’s piety” means that you should try to ascertain as objectively as possible what is in the text (textualism) and/or what the intentions were of the person, group of persons or institution that made the text (intentionalism). Armstrong has no patience for such an exercise. She (and the rabbis she admires so much) are not busy with the text but with their own moral ideas. But exactly that, so Robinson would argue, makes them violators of “interpreter’s piety”. If you want to know what a “Christian ethics” looks like, then you have to gauge what Christ said and what he meant, not what you, the interpreter, hope he said. “I urge you to do this, or at least not to use the phrase ‘Christian values’ until you have done it.”28
To further illustrate this point, let us pay some attention to what Armstrong finds so refreshing in the Jewish tradition of interpreting Scripture. She refers to the tradition of midrash and says: “Above all, midrash must be guided by the principle of compassion.”29 On the basis of this guideline
27 Robinson 1975, p. 142.
28 Ibid., p. 142.
Why I had not read Karen Armstrong (but I have now)
Armstrong says, referring to the great Jewish religious leader Hillel (c. 60 BCE-10 CE): “The essence of Torah was the disciplined refusal to inflict pain on another human being. Everything else in the scriptures was merely ‘commentary’, a gloss on the Golden Rule.”30 Hillel also had a clear vision of the
way the Torah should be studied by the exegetes: “When they studied Torah, rabbis should attempt to reveal the core of compassion that lay at the heart of all the legislation and narratives in the scriptures – even if this meant twisting the original meaning of the text.”31 R. Akiba (c. 50-135 CE), the
leading sage of the later Yavneh period, declared that the greatest principle of the Torah was the commandment of Leviticus: “Thou shalt love thy neighbor as thyself” (Leviticus 19:18).
Many people will read this with approval. But the relevant question is:
what do we like in those words? What we like in those words, so it seems, are the moral guidelines that are being proclaimed, but that is crucially different from the theory of interpretation that is presented (if we are kind enough to call this a “theory” at all). The moral principles presented here are: (1) compassion;32 (2) as a consequence, the refusal to inflict pain; (3) the golden rule.
The German philosopher Arthur Schopenhauer (1788-1860), who made “compassion” the cornerstone of his ethics, would have been satisfied.33 And
his British colleague Jeremy Bentham (1748-1832) would have been happy with the consequence drawn from this principle: “Never inflict pain”. This means the rabbis were true followers of Schopenhauer or (on the basis of the second inference) utilitarians.34 And finally they were Kantians as well: they
advocated the golden rule.35 So apparently the Torah is a commentary on
Schopenhauer, Bentham and Kant.
Now, Hillel knew that there are things in the Torah that do not accord with Schopenhauer, Bentham and Kant. What to do with those passages? Well, the rabbis should attempt to reveal the core of compassion that lies at the heart of Scripture, according to Armstrong. But what if that appears impossible? In that case “twisting the original meaning of the text” is allowed, Armstrong approvingly remarks.
30 Ibid., p. 83. See also: White 1896, p. 293: “It can not be forgotten that Rabbi Hillel formulated the golden rule, which has before him been given to the extreme Orient by Confucius, and which afterward received a yet more beautiful and positive emphasis from Jesus of Nazareth.”
31 Armstrong 2007, p. 83.
32 See also ibid., p. 84: “R. Johanan had shown that, as Hillel claimed, charity was indeed central to scripture: it was the exegete’s job to elucidate this hidden principle and bring it to light.”
33 See: Schopenhauer 1840.
34 See on the ethics of Bentham: Bentham 1789.
What is the conclusion we have to infer from this “theory” of interpretation? The only conclusion seems to be that in case of conflict between some passages of the Torah on the one hand and Schopenhauer, Bentham and Kant on the other, it is those philosophers who have the final word. The value of compassion, the principle never to inflict pain and the golden rule have the final word, not Scripture. Scripture is moulded according to those values and principles; the principles are not derived from Scripture and neither can they be abolished on the basis of Scripture.36
Richard Dawkins makes this point when he summarizes his treatment of this issue with the words:
My main purpose here has not been to show that we shouldn’t get our morals from scripture (although that is my opinion). My purpose has been to demonstrate that we (and that includes most religious people) as a matter of fact don’t get our morals from scripture.37
Does Armstrong agree with Dawkins?
There are some, although only a few, passages where it seems to dawn upon Armstrong that this “theory” is no theory at all. On the liberal interpretation of the rabbis she so much admires, she tells us: “To a modern scholar, this seems to violate the integrity of the text, and seeks meaning at the expense of the original”.38 But that conscientious objection (reminiscent of Robinson’s
“interpreter’s piety”) is silenced immediately in the sentence following upon the passage just quoted: “But the rabbis believed that because scripture was the word of God, it was infinite. Any meaning that they discovered in a text had been intended by God if it yielded fresh insight and benefited the community.”39
This is a kind of wordplay with the word “infinity”. Because God is “infinite”, his Scripture is presupposed to be “infinite” in meaning as well. Subsequently we are informed what this “infinity” in meaning implies: it gives a free license to the caste of interpreters to project into the text whatever benefits the community (or rather what they think benefits the community). But if Scripture is always interpreted against the background of the three principles mentioned before (show compassion; never inflict pain; apply the golden rule) it is far from “infinite”. It is highly restricted. If Scripture commands the destruction of another people (see for instance Numbers 31), the enlightened interpreters will tell us that this could not be the intention of the maker of the text. This is all fine, but it proves that the range of interpretations is restricted.
36 Richard Dawkins comes to a similar conclusion when he writes in Dawkins 2006, p. 275: “we pick and choose among the scriptures for the nice bits and reject the nasty. But then we must have some independent criterion for deciding which are the moral bits: a criterion which, wherever it comes from, cannot come from scripture itself and is presumably available to all of us whether we are religious or not.”
37 Ibid., p. 283. See also ibid., p. 298: “the holy books do not supply any rules for distinguishing the good principles from the bad”.
There is another conclusion we have to draw from the passages quoted from Armstrong: not only are the rabbis a combination of utilitarians, Kantians and followers of Schopenhauer, but God himself is as well. If Armstrong or the rabbis read something in Holy Scripture that contradicts the principles expounded by the philosophers mentioned, this will not be accepted as “divine”. So it is not a free divine will that is authoritative for the liberal believers; “the benefit of the community” is the real guideline for interpretation. There is nothing against this, of course, but at the same time it is nothing special.
Antes, Peter, “Sagen alle Religionen dasselbe?”, in: Marburg Journal of Religion, Volume 12, No. 1, (May 2007), pp. 2-10.
Armstrong, Karen, A History of God: From Abraham to the Present: the 4000-Year Quest for God,
Heinemann, London 1993.
Armstrong, Karen, The Battle for God: Fundamentalism in Judaism, Christianity and Islam, Harper-Collins, London 2000.
Armstrong, Karen, Islam: A Short History, Random House, Toronto 2002, p. 164.
Armstrong, Karen, A Short History of Myth, Canongate, Edinburgh, New York, Melbourne 2005.
Armstrong, Karen, The Bible: The Biography, Atlantic Books, London 2007.
Armstrong, Karen, The Case for God: What Religion Really Means, The Bodley Head, London 2009.
Bentham, Jeremy, An Introduction to the Principles of Morals and Legislation, Edited by J.H. Burns and H.L.A. Hart, Methuen, London/New York 1982 (1789).
Bloom, Harold, Shakespeare: The Invention of the Human, Riverhead Books, New York 1998.
Bobbitt, Philip, The Shield of Achilles: War, Peace and the Course of History, Penguin Books, London 2002.
Dawkins, Richard, The God Delusion, Bantam Press, London 2006.
Haldane, John, An Intelligent Person’s Guide to Religion, Duckworth Overlook, London, New York, Woodstock 2003, p. 28.
Henrie, Mark C., A Student’s Guide to the Core Curriculum, ISI Books, Wilmington, Delaware 2000.
Hofstadter, Douglas, Gödel, Escher, Bach: an eternal golden braid, Basic Books, New York 1979.
Huxley, Thomas Henry, “Naturalism and Supernaturalism”, 1892, in: Thomas Henry Huxley,
Agnosticism and Christianity. And other Essays, Prometheus Books, Buffalo, New York 1992, pp. 92-118.
Kant, Immanuel, Grundlegung zur Metaphysik der Sitten, (1785), in: Werkausgabe, Band VII, Hrsg. W. Weischedel, Suhrkamp, Frankfurt am Main 1981, pp. 11-102.
Lewis, Joseph, The Ten Commandments: An Investigation into the Origin and Meaning of the Decalogue and an Analysis of its Ethical and Moral Value as a Code of Conduct in Modern Society, Freethought Press Association, New York, NY 1946.
Mason, Herbert, “Review of A History of God by Karen Armstrong”, in: The American Historical Review, Vol. 100, No. 2 (Apr., 1995), pp. 481-482.
McBrien, Richard P., The HarperCollins Encyclopedia of Catholicism, HarperCollins Publishers, New York, NY 1995.
Raven, Charles E., “Religion & Science: A Diagnosis”, L.T. Hobhouse Memorial Trust Lecture, No. 16, delivered on 1 May 1946, in: Hobhouse Memorial Lectures 1941-1950, Oxford University Press, London 1952, pp. 3-16.
Robinson, Richard, An Atheist’s Values, Blackwell 1975.
Schopenhauer, Arthur, Über die Grundlage der Moral, in: Sämtliche Werke, Band III, Cotta-Verlag/ Insel-Verlag, Stuttgart/Frankfurt am Main 1976 (1840), pp. 631-815.
Thomas, Scott M., “Review of The Battle for God: fundamentalism in Judaism, Christianity and Islam”, in: International Affairs, Vol. 77, No. 1, (Jan., 2001), pp. 194-196.
White, A.D., A History of the Warfare of Science with Theology in Christendom, Volume 2, Dover Publications, New York 1960 (1896).
“1. We regard the state as an agency whose positive assistance is one of the indispensable conditions of human progress.”1
The way in which economists have looked at the state and its effects on the economy has fluctuated substantially over time.2 Nowadays, economists tend to see the market as the default option for social order, and a role for government only when markets fail. In contrast to the first substantive article in the first constitution of the American Economic Association – the most influential among associations of economists – governments are largely seen as affecting the workings of an economy negatively if and when they do more than a ‘nightwatch state’ would.
Markets are typically believed to fail under circumstances of (excessive) externalities that are either positive or negative, in cases when public goods are traded, in cases of increasing returns or a natural monopoly creating market imperfections, or, possibly, according to some, to correct an unequal distribution of wealth or income. Social and institutional economic thinking has been much more amenable to a role for government in the economy. Its role, when explicitly investigated, is seen as benevolent in principle. Institutional economics recognizes that markets cannot function if not embedded in a broader set of interrelated institutions.
Developing a convincing analysis of the role of government in economic processes, however, needs to start by considering government failure. Government failure is not the flip-side of the coin of market failure: there is no theory of ‘non-market failure’ as of yet.3 Drawing on insights from law, the
■ Wilfred Dolfsma is full professor at the University of Groningen, specializing in
innovation. University of Groningen, School of Economics and Business, PO Box 800, 9700 AV Groningen, Netherlands, ph. +31-50 363 3453, fax +31-50 363 7110, email@example.com.
1 From: Article III (Statement of Principles) of the Constitution By-Laws and Resolutions of
the American Economic Association, Publications of the American Economic Association 1(1) (March 1886), pp. 35-46, at p. 35.
2 Medema 2003.
3 Cf. Wolf 1997. Wolf (p. 64 ff.), however, conceives of non-market failure as circumstances that lead to a rise in the price of the services it offers. He thus seems to follow the logic of determining market failures – externalities, public goods, increasing returns, and possibly merit goods – that are also apparent by their effect on what would otherwise be a ‘natural’ price.
philosophy of law and law & economics, I will develop some ideas to understand government failure. To see what exactly the role of government can be in an economy, one can approach from the opposite direction, asking ‘When will a government fail?’ I propose a framework for understanding government failure from a social and institutional perspective. I thus identify and develop four different types of government failure. Government can set rules4
for economic processes and actors that are (1) too specific, (2) too broad, (3) arbitrary, or (4) that conflict with other rules it has set out to address other, related issues (possibly primarily non-economic). This possibly non-exhaustive list of government failures gives rise to different kinds of problems for the economy, which I will elaborate upon in the paper in the context of Intellectual Property Right (IPR) and Anti-trust laws in particular.
Rules in the Economy
By now the importance of rules for understanding the economy and economic developments has become pretty clear to most economists. Even hard-core neoclassical economists, who have a deep-seated antipathy against any role for government in the economy, acknowledge this. Some of them maintain that an orderly and thus rule-governed economy can do without rules issued and maintained by a government,5 but most acknowledge that a government is necessary for, or even a prerequisite of, most modern economic activities.6 A role for the government is mostly acknowledged in addressing issues
related to property rights, including intellectual property rights, and also in relation to contract law. Even when a national government is absent or weak, players with authority and legitimacy underpinning a set of rules make economic development and prosperity more likely. Sometimes this has been a non-governmental authority such as the Catholic Church, especially during the Middle Ages.7
Even within an otherwise connected economic and legal sphere, such as the United States in the early decades of its existence, rules set out by a (local) government can profoundly affect the direction in which economic development is headed as well as overall levels of income or income distribution.8
The basis for contract law in Massachusetts has been the will theory, whereas the basis in Virginia was the fairness doctrine from the common law tradition. While the latter allows a judge to annul a contract after it has been agreed upon by the parties, the former does not. Trade and investment are more likely to render benefits in the former.
4 I will use the terms laws, rules, and standards interchangeably and as an institutional economist discuss them as (formal) institutions (cf. Dolfsma 2004, 2009).
5 Ellickson 1991.
6 Glaeser & Shleifer 2002; Greif 2006.
7 Ekelund et al. 1996.
Government Failure – 4 types
This strongly suggests that, for a number of possible reasons, the rules that a government sets out are less than fully plastic.9 If economists in the neoclassical tradition acknowledge a role for government at all, the often implicit assumption is that the rules it sets, and even the kind of government in existence, are fully plastic, such that optimal outcomes can always be attained by tweaking them.10 Social and institutional economics are more
likely to acknowledge the path dependence of (government) rules than many other lines of thought within economics,11 recognizing the incompleteness of laws both as an unavoidable necessity and as something purposefully sought.12
The proper role of the government is an issue of ideological discussion that goes to the heart of people’s convictions of a politico-philosophical nature. At one extreme is the idea that government should be a nightwatch state, focusing on issuing a minimum of rules related to commerce and (national) safety; at the other is the idea that the state should be concerned with the proper functioning of the economic system. The extreme position among the latter is associated with communist ideas. In the current economic crisis,13 far less far-reaching ideas, most prominently advocated by John Maynard Keynes, have gained quite a bit of currency once again. The proper functioning of the economy is then seen to relate to addressing systemic risks to the economy that are to be expected when the banking system or the automobile industry fails as a whole.
Whatever one’s views, scholars will recognize that government14 must
formulate rules for the functioning of society. In a Marshallian tradition, government failure is discussed in terms of the effects of any particular set of rules formulated by the government. When rules of a government lead to a concentration of (political) power in the hands of a few, when they lead to overly bureaucratic administration, or when they give rise to a government which is unaccountable, government may be said to have failed.
In my discussion here, I conceive of government failure in four different ways, inspired by insights from the philosophy of law. I thus do not address the effects of government rule-setting activity, but take a look at the
9 Acemoglu et al. 2001; Acemoglu et al. 2005.
10 Cf. Niskanen 2003.
11 Wunder & Kemp 2008.
12 Fon & Parisi 2007.
13 Dolfsma & McCarthy 2009.
nature of the rules. Government failure is then in no way the inverse of market failure as discussed by economists. When discussing the possibilities and kinds of government failure, I will not assume a rules-free State of Nature or a situation behind a Veil of Ignorance. I will rather discuss possible government failure in the face of any particular existing set of rules.15 The failure will thus be in the context of a government changing, adding or taking away rules, which may turn out to cost some, or even all, members of society and benefit others. In addition to considerations of a consequentialist kind about the greatest good for the greatest number, there may be distributional issues that affect parties’ deontological claims. Rather than merely addressing this from a consequentialist perspective, as Fon and Parisi (2007) do, my approach is more of a deontological one.
The non-exhaustive list of four different ways in which government can fail will draw mostly on scholarly work in the philosophy of law. When formulating rules, then, government can be (1) too specific, (2) too broad, (3) arbitrary, or (4) setting out rules that conflict with other rules it has set out to address other, related (possibly primarily non-economic) issues.
Obviously these categories relate to Sullivan’s (1992) distinction between rules and standards along a ‘continuum of discretion’. Rules, as Sullivan refers to them, offer less discretion than do standards, since they “bind the decision maker to respond in a determinate way to the presence of delimited triggering facts”.16 Standards require decision makers to refer back to background principles. The discussion of a government setting out more or less general rules seems to allude to what is known as a Roman law tradition, rather than a case law tradition. In Anglo-Saxon case law, jurisprudence proceeds as verdicts are expressed and explained in courts about particular cases, with reference to cases that have been decided upon previously. In the Roman law tradition that prevails in other parts of the world, the legislative part of government sets out general rules that citizens follow. The executive part of government sees to it that the rules are followed, whereas the third part of the trias politica or separation of powers, the judiciary, may further elaborate on the rules set out.
Too specific rules
Ehrlich & Posner argue that “a perfectly detailed and comprehensive set of rules brings society nearer to its desired allocation of resources by discouraging socially undesirable activities and encouraging socially desirable ones.”17 In their view, rules cannot be too specific. Specific rules reduce uncertainty by making a ruling in a dispute more predictable, as decision makers are led to act more consistently by being involved in a more transparent process. The cost of the legal process is also reduced as the speed with which a final
15 Cf. Hamilton 1932; Dolfsma 2009.
judicial resolution is reached increases. Yet the many attempts at deregulation by a number of governments in recent years may be perceived as attempts to reduce the specificity of rules and, in doing so, correct an undesirable situation. A rule that is too specific is then seen as one kind of government failure.
Overly specific rules require that a government consider vast amounts of information of very diverse kinds. Rules formulated by the government that are very specific will soon become obsolete as circumstances change, and will need re-formulation.18 Not only will this be costly in itself for the rule-maker, but uncertainty for society is increased. Overly specific rules, even when addressing a specific set of phenomena that is limited in number, will necessarily and intimately relate to other rules. Such interrelatedness will make changing a rule difficult and costly, even if no conflict between rules is involved. Changes to a rule may necessitate reconsidering the other, related rules. Changing other rules too will hurt agents, who will need to be compensated if a government is not to become an unpredictable bully making arbitrary decisions. Such arbitrariness may affect the principle of equality before the law and the legal security that has been shown to foster commerce.19 This applies perhaps in particular to rules
related to property, bankruptcy, labour contracts, and enterprise.20
Too broad rules
A government (the legislature) that formulates rules in very broad terms, without giving guidance about their interpretation or without an authority that may provide such interpretation, may be said to fail. Broad rules may be an attempt to have subjects behave in accordance with the spirit of the rules, rather than consider anything that is not forbidden by specific rules fair game: (specific) rules do produce rascals.21 If rules are too broad, however, the ‘rules’ of the
jungle apply. The weaker party – economic actors such as consumers, employees, and SMEs – may be hurt.
Broad rules may prevent over-inclusion or under-inclusion of situations in a category of events to which a rule applies, as very specific rules may ‘suppress relevant similarities and differences’ (Sullivan 1992, p. 66). Broad rules (or standards, such as ‘reasonableness’ or ‘efficiency’) are open-ended and allow for more discretion and flexibility, but are possibly more costly to apply, as a substantial amount of information needs to be gathered and processed for each case at hand. Specific rules, on the other hand, remove specific kinds of circumstances from consideration, thus allowing for (more) direct application. Yet specific rules are costly to promulgate, as a government setting such a rule needs to consider a priori what kinds of circumstances are to be ignored
18 Ehrlich & Posner 1974.
19 Greif 2006.
and what the consequences of such ignorance will be for the decisions made; they also allow for less flexibility. In particular when events to be regulated are rather heterogeneous or change rapidly, broader rules may be preferred. Under broad rules, decision makers will also feel more compelled to explain their judicial decisions.
Arbitrary rules

A government that sets out rules of an arbitrary kind for society and the economy is what Margalit calls a government that rules an indecent society.22 It is
a society that humiliates its citizens (burghers) but also stifles commerce, which is exactly what the Magna Carta of 1215 was meant to curb. Arbitrary rules increase uncertainty in the economic realm as well. The likelihood that returns on an investment can actually be enjoyed is reduced as rules set by government become more arbitrary. Investment levels will then be lower.
Conflicting rules

Any practice23 in society is likely to have multiple dimensions and may then
be affected by rules set out by government promulgated to address specific issues relevant for parts or elements of that practice. Behaviour deemed desirable as stipulated by one set of rules may be undesirable from the perspective of another set of rules. If and when the behaviours in practice are inseparable, the actors involved in the practice may find themselves in a bind. The sharper the conflict between rules, the greater the uncertainty.
The four ways in which governments may fail can, of course, relate to each other in actual fact, and this comes out best when analyzing a situation where two sets of rules conflict.
Policy: Innovation & Competition
The relation between innovation and competition is unclear empirically (see Table 1).24 Government policy is involved in setting out rules that affect both
these sides of the equation, however. Competition is stimulated most pertinently by anti-trust laws, while one pertinent set of rules meant to stimulate innovation is intellectual property rights (IPRs). Anti-trust laws seek to limit the extent to which a single firm may control a particular market and thus set monopoly prices, or at least prices substantially higher than would otherwise be the case. IPRs give right holders the exclusive right to commercially exploit the material they can claim property rights over. While this may not in fact be a monopoly, it could give right holders the possibility to
22 Margalit 1996.
charge higher prices and recoup investments in both R&D efforts and production capacity. The extent to which this motive for having a system of IPRs actually makes sense empirically is disputed.25 At the same time, of
course, IPRs are meant to stimulate the diffusion of newly developed knowledge by requiring publication of knowledge if it is to receive protection.
Table 1: Competition and Innovation Related – Findings from selected studies

Aghion & Howitt (1992)      Innovation intensity decreases as competition intensity rises
Aghion et al. (2005)        Inverted-U
Blundell et al. (1995)      Competition stimulates innovation
Boone (2000)                Increased competition will not lead to both product and process innovation
Caballero & Jaffe (1993)    Innovation intensity decreases as competition intensity rises
Cohen & Levin (1989)        Relation market structure & innovation fragile
Geroski (1990)              Monopoly market structure does not stimulate innovation
Kamien & Schwartz (1975)    Unclear relation between competition and innovation
Symeonidis (2001)           No evidence that price competition benefits innovation

Source: Dolfsma & Van der Panne (2009).
The extent to which these two sets of rules conflict has changed over time. In the past, anti-trust laws in Europe, for instance, have been promulgated with a view to protecting incumbent firms rather than protecting or even stimulating small firms and entrants into a sector.26 IPR rules have also changed over time.27 The general direction in which these have moved has produced a situation in which tensions between them have increased. The possibility of government failure then looms large.
There are several ways in which (the potential for) anti-competitive behaviour can be detected. One is the Small but Significant Non-transitory Increase in Price (SSNIP) test, also called the Hypothetical Monopolist Test. Given a proper definition of the relevant market – no small feat – the effect of a hypothetical increase of some 5 to 10 percent in the price of one good offered in a market is determined. If a player can raise its price this way permanently without seeing its customers move to a competitor, that player is a monopolist or has such powers to a degree. At issue is what price and cross-price elasticities exist for a good that one agent offers on the market. This test takes the demand side of a market into consideration, focusing only on price competition between relatively homogeneous goods. An industry’s history as a monopoly in which higher than usual prices are already charged cannot be detected with the SSNIP test: only (hypothetical) changes to the current situation are considered. In addition, the industry may face potential entry and thus be contestable without being actually contested.28 It is unclear if and
25 See Dolfsma 2005, 2008 for a discussion.
26 Pace 2007.
how the contestability of a market will show in a SSNIP test.
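The arithmetic behind the hypothetical 5 to 10 percent price rise can be sketched with standard critical-loss reasoning. The sketch below is illustrative only: the margin and elasticity figures are invented for the example, not drawn from any real market.

```python
# Hedged sketch of the SSNIP ("hypothetical monopolist") logic using
# critical-loss arithmetic. All numbers are illustrative assumptions.

def critical_loss(price_rise: float, margin: float) -> float:
    """Fraction of sales a firm can lose before a price rise of
    `price_rise` (e.g. 0.05 for 5%) stops being profitable, given a
    gross margin (P - MC) / P."""
    return price_rise / (price_rise + margin)

def predicted_loss(price_rise: float, own_price_elasticity: float) -> float:
    """First-order estimate of sales actually lost: elasticity times
    the relative price change."""
    return own_price_elasticity * price_rise

def ssnip_profitable(price_rise: float, margin: float, elasticity: float) -> bool:
    """True if the hypothetical monopolist could profitably sustain the
    price rise, i.e. the predicted loss stays below the critical loss."""
    return predicted_loss(price_rise, elasticity) < critical_loss(price_rise, margin)

# A 5% rise, a 40% margin, fairly price-insensitive customers
# (elasticity 0.8): the rise is profitable, suggesting market power.
print(ssnip_profitable(0.05, 0.40, 0.8))   # True
# Very price-sensitive customers (elasticity 3.0): the rise fails.
print(ssnip_profitable(0.05, 0.40, 3.0))   # False
```

Note how the elasticities the test needs must be estimated from demand-side data, which is precisely why a proper market definition is "no small feat".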
Agents in an industry who behave the way a monopolist with market power would may show other behaviours than the ones detected by the SSNIP test. Firms can, for instance, employ limit pricing as a strategy to keep prices higher than would otherwise be the case while at the same time stifling competition, as the price set is lower than an entrant could possibly hope to charge, given the lower economies of scale (and thus higher average costs per product) that an entrant can expect in the first phase after entry. Such behaviours can have real, negative effects on both consumers and (potential) competitors, and may thus be deemed undesirable. If rules in the domain of anti-trust law were so specific as to only use the SSNIP test to detect anti-competitive behaviour, such behaviours would not be seen or found undesirable.
Being too strict in ruling out such behaviour by promulgating rules that specifically aim to prevent it may itself be an instance of government failure. Take the example of newspapers made available free of charge. The price they charge – €0 – may be deemed a limit price to deter entry. Doing so would, however, ignore the nature of this market as a two-sided market.29 Firms in this industry offer goods that actually cater to two markets at the same time. Readers are interested in them to consume news and possibly the product information advertised. Advertisers are interested in getting attention for their products. The lower the price charged by the producer of a newspaper, the larger the number of consumers and thus the audience for the advertisers. The income for the producers of free newspapers comes from one side of the market only. Conceiving of the €0 price as a limit price would in this case be unreasonable.
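The two-sided logic of this paragraph can be made concrete with a toy profit calculation; the demand curve, unit cost and advertising rate below are made-up assumptions for illustration, not figures from the newspaper industry.

```python
# Hedged illustration of why a EUR 0 cover price can be profit-maximizing
# in a two-sided market: the publisher earns `ad_rate` per reader from
# advertisers, so readership itself is revenue. All numbers are invented.

def readership(price: float) -> float:
    """Toy linear demand: readers drop off as the cover price rises."""
    return max(0.0, 1000.0 * (1.0 - price / 2.0))

def profit(price: float, unit_cost: float = 0.5, ad_rate: float = 3.0) -> float:
    """Cover-price margin plus per-reader advertising income, per copy."""
    return (price - unit_cost + ad_rate) * readership(price)

# Scan cover prices from EUR 0.00 to EUR 2.00 in cents: with this much
# advertising income per reader, the EUR 0 price wins outright, so
# reading it as a predatory limit price would misjudge the market.
best_profit, best_price = max((profit(c / 100.0), c / 100.0) for c in range(201))
print(best_price)   # 0.0
```

With a lower advertising rate the optimum shifts back above unit cost; the point is only that a below-cost price on one side of a two-sided market can be ordinary profit maximization rather than predation.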
A broad measure of possible anti-competitive behaviour, such as the so-called Lerner index, which simply tries to fathom directly the extent to which a firm in an industry is able to charge a price well in excess of marginal costs,30 may be problematic too. Such a test would pinpoint as undesirable kinds of behaviour that might well be entirely fair, such as bundling or rebates.31
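The Lerner index is conventionally written L = (P − MC)/P: zero when price equals marginal cost (perfect competition) and approaching one as the markup grows. A minimal sketch, with made-up prices:

```python
# Minimal sketch of the Lerner index of market power, L = (P - MC) / P.
# The prices and costs below are illustrative, not empirical values.

def lerner_index(price: float, marginal_cost: float) -> float:
    """Relative markup of price over marginal cost."""
    if price <= 0:
        raise ValueError("price must be positive")
    return (price - marginal_cost) / price

print(lerner_index(10.0, 10.0))   # 0.0 -> no markup, no market power
print(lerner_index(10.0, 4.0))    # 0.6 -> substantial markup
```

The practical difficulty, which the broad-rule objection in the text turns on, is that marginal cost is rarely observable, and a high markup may reflect recouped fixed costs (such as R&D under an IPR) rather than abuse.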
A broad rule to detect anti-competitive behaviour, such as one based on the Lerner index, might in particular point to unbecoming behaviour that is allowed by IPRs as a consequence of a firm’s innovative efforts that have led to patentable knowledge. If and when such knowledge is used in producing goods and services for which there is a market, the price asked in that market might be substantially higher than marginal cost. Which set of rules is to apply? Will the application of either of the two not be arbitrary? Will players
28 Baumol 1982.
29 Dolfsma & Nahuis 2006; Rysman 2009.
30 Cf. Boone 2000.
involved not be left in uncertainty? Is the correct conclusion to develop more concrete rules to address situations where rules conflict, and might these become overly specific? Or, rather, should broad rules apply that provide guidance but do not reduce uncertainty?
In this brief contribution to the Liber Amicorum for Aernout Schmidt, as he retires as professor of IT and Law from eLaw@Leiden, I have endeavored as an economist (and philosopher) to venture into the field of law. The field of law, for an economist, is as fascinating as it is foreign. The logic applied and the material deemed relevant can be very different from what one is trained in as an economist. Nevertheless, or perhaps rather because of this, I have found discussing and working with Aernout very stimulating and a profound learning experience.
It is in grappling with the logic of legal scholars that I have come to look at the role of government as a rule-setting agent differently from how an (institutional) economist, however much one may be aware of the inescapable presence of rules in an economy, would look at it. This has led me to suggest that, in promulgating rules, there are four ways in which a government can fail. The rules a government imposes on society and the economy can be (1) too specific, (2) too broad, (3) arbitrary, or they can (4) conflict. When a government fails in such a way, the cost to economic actors is real, as a discussion of the realms of anti-trust and intellectual property law may suggest.
D. Acemoglu, S. Johnson & J.A. Robinson (2001), 'The Colonial Origins of Comparative Development: An Empirical Investigation', in: American Economic Review 91(5), p. 1369-1401.
D. Acemoglu (2005), 'Institutions as the Fundamental Cause of Long-Run Growth', in: Ph. Aghion & S. Durlauf (eds.), Handbook of Economic Growth, Elsevier/North-Holland, p. 385-472.
Anonymous (2009), ‘The Unkindest Cuts’, in: The Economist, August 20, p. 62.
W.J. Baumol (1982), 'Contestable Markets: An Uprising in the Theory of Industry Structure', in: American Economic Review 72(1), p. 1-15.
J. Boone (2000), 'Competitive Pressure: The Effects on Investments in Product and Process Innovation', in: RAND Journal of Economics 31(3), p. 549-569.
W. Dolfsma (2005), 'Towards a Dynamic (Schumpeterian) Welfare Economics', in: Research Policy 34(1), p. 69-82.
W. Dolfsma (2008), Knowledge Economies, London & New York: Routledge.
W. Dolfsma (2009), Institutions, Communication and Values, Houndmills: Palgrave Macmillan.
W. Dolfsma and G. van der Panne (2009), Innovation, Industry Structure and Industry Dynamics, Mimeo.
W. Dolfsma and R. Nahuis (2006), 'Media & Economics: Uneasy Bedfellows?', in: The Economist 154(1), p. 107-124.
W. Dolfsma and K.J. McCarthy (2009), 'What's in a name? Understanding the language of the credit crunch', in: Journal of Economic Issues 38(2).
I. Ehrlich and R.A. Posner (1974), 'An Economic Analysis of Legal Rulemaking', in: Journal of Legal Studies 3(1), p. 257-286.
R.B. Ekelund Jr, R.F. Hébert, R.D. Tollison, G.M. Anderson and A.B. Davidson (1996), Sacred Trust: The Medieval Church as an Economic Firm, New York: Oxford UP.
R. Ellickson (1991), Order without Law: How Neighbors Settle Disputes, Cambridge, MA: Harvard UP.
V. Fon and F. Parisi (2007), 'On the optimal specificity of legal rules', in: Journal of Institutional Economics 3(2), p. 147-164.
E. Glaeser and A. Shleifer (2002), 'Legal Origins', in: Quarterly Journal of Economics 117, p. 1193-1230.
A. Greif (2006), Institutions and the Path to the Modern Economy, Cambridge: Cambridge UP.
W. Hamilton (1932), ‘Institutions’, in: Encyclopedia of the Social Sciences, eds. E.R.A. Seligman & A. Johnson, vol. 8, New York: MacMillan, p. 84-89.
S. Kim (2009), 'Institutions and US regional development: a study of Massachusetts and Virginia', in: Journal of Institutional Economics 5(2), p. 181-205.
J. Le Grand (2003), Motivation, Agency and Public Policy: Of Knights and Knaves, Pawns and Queens, New York: Oxford UP.