|Multi-Stakeholder Public Policy Governance and its Application to the Internet Governance Forum|
This does, however, suggest the way forward: a hybrid between hierarchical ordering in the form of meritocracy, and a more participatory form of anarchistic, democratic or consensual ordering, to fill the normative holes in the hierarchical option, while retaining many of its benefits (such as the greater efficiency of a smaller governance body). Such a mixed system of governance is in fact precisely what Aristotle recommended. It is also widely seen in Internet governance. ICANN, most notably, has been described as a “semi-democracy,” combining hierarchical and democratic elements through the composition of its board, which is drawn partly from the meritocratic Supporting Organisations and partly from the At Large community. The same idea is found in other organisations in which a standing committee is appointed alongside elected members, for example in the Wikimedia Foundation and the W3C.
Another example of an effective hybrid of hierarchical and participatory forms, as foreshadowed at the close of the discussion of anarchism, is the case of co-regulation.
Co-regulation illustrates a possible compromise between anarchistic forms of ordering (by norms, markets and architecture) and governance by rules, in which decentralised collective action is guided or directed by government (or to generalise this case, by some other hierarchical authority). To be more specific, co-regulation is the process by which an industry or industry segment is permitted to draft its own code of conduct on a particular issue, which if acceptable to the executive agency responsible for regulating that issue area, will be “registered” by it to serve in lieu of government regulation. Once registered the code applies to the entire industry sector in question, so that even those who are not signatories to it can be directed by the agency to comply with it.
There are numerous possible variations of this model along a continuum between pure hierarchical ordering and pure decentralised collective action (or between “command and control” and self-regulation, in simpler if less precise terms), and these are sometimes known by other names such as “enforced self-regulation” and “policy co-ordination,” but the name and description given reflect the dominant practice in Australia.
Examples of co-regulatory regimes already in place in Australia include the various codes on topics such as billing and customer complaints developed by Communications Alliance Ltd for the telecommunications industry under the Telecommunications Act 1997 (Cth), the Internet content regulation regime established under the Broadcasting Services Act 1992 (Cth) and drafted by the IIA for the Internet industry, and two codes under the Spam Act 2003 (Cth), one of which was drafted by a committee of the IIA for the Internet industry and the other by the Australian Direct Marketing Association (ADMA) for the direct marketing industry. In all of these cases, the government agency responsible for the registration of the codes is ACMA.
The benefits of co-regulation can be described by comparison to either of the pure forms of which it is a hybrid. Over pure hierarchical organisational forms, it offers many of the same benefits as self-regulation: greater speed and lower expense than traditional governmental regulation, the ability of industry to develop or modify codes swiftly in response to environmental stimuli, and the pull towards voluntary compliance that is associated with governance by norms.
As for the benefits of co-regulation over anarchistic forms of ordering, the ability for compliance with a co-regulatory code to be independently enforced addresses the limited effectiveness of anarchistic ordering that results from its voluntary nature. Although a registered co-regulatory code does not have the full force of law, pursuant to section 121 of the Telecommunications Act 1997 (Cth), a member of an industry covered by a code can be directed to comply with its provisions by ACMA. It is an offence to fail to comply with such a direction.
The substantive content of the code is also more likely to reflect public policy concerns, rather than serving only the interests of its drafters as is often found in cases of pure self-regulation. This is achieved in much the same way as in the case of directives of the European Union, whereby the government regulator specifies certain minimum outcomes that the code is required to achieve, but not how those outcomes are to be achieved, which is left to the discretion of the industry.
The problems of accountability and transparency associated with anarchistic ordering can also be addressed in co-regulatory structures, by establishing systems for the regulator to monitor compliance and for complaints to be independently heard. For example, clause 12 of the Internet Industry Spam Code of Practice drafted by the IIA provides that consumers may make complaints about an ISP’s breach of the code to ACMA, which will refer them to the IIA or the Telecommunications Industry Ombudsman (TIO) for determination.
Since these are all benefits to government more so than to industry, it is a misapprehension to consider that phenomena such as co-regulation represent a loss of power by states to the private sector. Rather, the sharing of state authority with private actors is a process for which states are largely responsible, and which serves their own ends first and foremost.
However, whilst addressing some of the shortcomings of each of the pure regulatory forms, the co-regulatory form does introduce or exacerbate certain other problems. These include the risk of regulatory capture, and the inherent incentive for industry to “cheat,” for example by writing loopholes into its codes.
These dangers underline the need for broadly-based oversight of co-regulatory arrangements, from civil society as well as government. For example section 117 of the Telecommunications Act requires codes registered under that Act to be subjected to an open process of public consultation. All codes registered to date have also been subject to regular review, with the first review of the Spam Code for example taking place one year after its registration.
The model of domestic co-regulation could in principle be extended to the international arena, as self-regulatory arrangements are naturally extensible transnationally, as for example in the case of the International Bar Association’s International Code of Ethics. However in practice this is complicated by the limited choice of international authorities to assume the regulator’s role. Although there may already be an appropriate regulator in some issue areas, such as the WTO (which with the assistance of its members could transform international commercial arbitration into a co-regulatory regime), in other issue areas such as Internet governance new intergovernmental agreements may be required to establish a regulatory framework.
For this reason there are few existing international or transnational examples analogous to domestic co-regulation, but the European Union’s CE mark found on consumer and industrial goods offers one. The requirement for goods sold within the European Union to conform to EU standards and to carry the CE mark is mandated by EU resolution, but a product’s conformity to those EU standards is self-assessed by or on behalf of the product’s manufacturers, who must create a test report and declaration of conformity to support their assessment.
Hybrid regulatory models are also found in the context of Internet governance. Most significantly, ICANN remains contracted to the NTIA until at least 2009, an arrangement under which ICANN manages the DNS essentially independently, while the NTIA retains ultimate authority over the DNS root.
auDA provides another good example. The process by which control of the .au ccTLD passed from a pure self-regulatory regime under Robert Elz and later ADNA, to auDA has already been described. In particular it was noted that this was facilitated by NOIE, a Commonwealth government agency, and that the Commonwealth reserved authority to itself under the Telecommunications Act 1997 to take over from auDA in the event that it ceased to act effectively.
In the context of the IGF, the scope for a co-regulatory approach can be found in the fact that one of the concessions made by governments in the Tunis Agenda was that the issues of DNS management and IP address allocation would be left outside the IGF’s mandate, and remain under the private management of the ICANN regime. There is no reason why the governmental stakeholders in the IGF could not similarly agree to leave other issues to be regulated through the decentralised collective action of the stakeholders at large, whilst retaining ultimate authority to intervene on a domestic or intergovernmental level should decentralised collective action fail to adequately address the issues in question.
Would an IGF structured in such a manner, as a hybrid between the hierarchical power of governments and the anarchistic ordering of all other stakeholders, still amount to a governance network as it has been described in this thesis? It is not exactly the hybrid between meritocracy and decentralised collective action that was previously considered, as it substitutes governments for a meritocratic elite drawn from amongst all stakeholders. This is in one way indefensible, in that it privileges one stakeholder group over the others; a stakeholder group that we have already found lacks the legitimacy to exercise authority over transnational public policy issues.
Yet in another way, it could be argued that if it is necessary to concede to hierarchical ordering in order to address some of the identified limitations of anarchistic ordering, governments are in a better practical position to hold this elevated position than any of the other stakeholder groups. After all, it is they who can most effectively wield the coercive power of rules. And to allow governments to wield hierarchical power would neatly side-step the dilemma of how to select a meritocratic elite to do so. Whilst it was vaguely suggested above that such an elite could be selected through democratic or consensual means, most governments can be presumed already to have been selected by such means (though admittedly not in respect of transnational issues). Why then should it be necessary to reinvent the wheel? Reflecting this view, former ICANN President and CEO M Stuart Lynn has argued,
Although governments vary around the world, for better or worse they are the most evolved and best legitimated representatives of their populations—that is, of the public interest. As such, their greater participation in general, and in particular their collective selection of outstanding non-governmental individuals to fill a certain portion of ICANN Trustee seats, could better fill the need for public accountability without the serious practical and resource problems of global elections in which only a relatively few self-selected voters are likely to participate.
If this view were to prevail, it would be that all stakeholders are equal within the IGF, but that some are more equal than others. Perhaps, however, this is the only practical outcome. The following discussion of hierarchy within open source software development may provide an insight into that suggestion.
Although the burgeoning success of open source software and of the philosophy underpinning it has often been described as the “open source revolution,” open source software is actually nothing new; in fact it is older than proprietary software. Levy describes how even in the late 1950s and early 1960s, software for the first generation of minicomputers was made available “for anyone to access, look at, and rewrite as they saw fit.”
Another common observation is that it is no coincidence that the rise of open source software has coincided with that of the Internet. As never before, the Internet facilitated the development of open source software en masse by geographically distributed groups of hackers. But the relationship goes back still further, as the technical infrastructure of the Internet was itself largely built on open source software—even before it was known by that name. Prior to the term “open source” being coined in 1998, it was more commonly known simply as “free software.”
However, the software is free in more than one sense. Free or open source software is, in the FSF’s words, not only free in the sense of “free beer,” but also in the sense of “freedom,” encompassing:
The freedom to run the program, for any purpose (freedom 0).
The freedom to study how the program works, and adapt it to your needs (freedom 1). Access to the source code is a precondition for this.
The freedom to redistribute copies so you can help your neighbor (freedom 2).
The freedom to improve the program, and release your improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a precondition for this.
Although it is not required in order to satisfy this definition, certain open source software licences, most notably the GNU General Public License (GPL) which is used by a majority of all open source software (see FSF, GNU General Public License (1991)), require any work copied or derived from software covered by the GPL to be distributed under the same licence terms. The FSF refers to this characteristic as “copyleft,” a play on “copyright”: it requires those who base their own works on copyleft-licensed software to forgo the exclusive rights that copyright law gives them to copy and modify their works, and to share those rights freely with the community.
More significant than the freedoms associated with open source software are the larger cultural and organisational consequences to which their exercise gives rise. These include the widespread voluntary service that members of the open source community provide in coding and documenting the software projects to which they contribute, and the typical high quality, timeliness and innovation of their output.
Eric Raymond, a hacker himself, has famously described the difference between the development methodology for proprietary software and that for open source software as that between “the cathedral and the bazaar,” in his essay of that name. To be built like a cathedral, in that context, is to be “carefully crafted by individual wizards or small bands of mages working in splendid isolation, with no beta to be released before its time,” whereas the bazaar style of development was epitomised by the Linux kernel development process, which
seemed to resemble a great babbling bazaar of differing agendas and approaches (aptly symbolized by the Linux archive sites, who’d take submissions from anyone) out of which a coherent and stable system could seemingly emerge only by a succession of miracles.
The same phenomenon of “peer production” has begun to propagate beyond software development into other fields. It has already been observed in the hours that hundreds of contributors devote each week to the Wikipedia project, producing the most comprehensive encyclopædia ever written. The licensing model employed by Wikipedia is equivalent to that of open source software, although the material licensed may be more accurately described as “open content,” and the license employed is the GNU Free Documentation License (GFDL).
There are, of course, other open content licences. Creative Commons is a project to draft and promote licences suitable for the release of all manner of literary, musical, artistic and dramatic works as open content. The Creative Commons Web site makes some of this content available, though Creative Commons licensed content is also found on many other sites including the Internet Archive and the OpenCourseWare project, inaugurated by MIT and since extended to other institutions for the publication of course materials.
The success of the open source development methodology is often explained by economic sociologists in terms of the low transaction costs associated with communication between developers, and the network effects which increase the value of the open source “commons” to all as more people become involved. Though puzzled as to what incentive individual developers have to build up this commons voluntarily, these scholars posit that it is a barter or gift exchange system in which developers exchange their labour for such goods as feedback from users and an enhanced reputation amongst their peers, or that it is a means of improving their future employment prospects.
To developers such as Raymond the question is less of a mystery: they do it because it is fun.
Linus Torvalds, original author of the Linux operating system kernel, concurs with this view in his autobiography (which is suitably enough titled Just For Fun), as does Levy in his history of the hacker community. Software development is only one application of the open source ethic, but the fun extends to publishers of other forms of open content too: Jimmy Wales of Wikipedia for example unpretentiously states, “The goal of Wikipedia is fun for the contributors.”
The same motivation also extends to projects small enough to be pursued by a single developer. Whilst these might not be thought of as organisations, lacking a community of developers, they are still aimed at a community of users or readers and thus fulfil social needs similar to those met by more structured virtual communities. Take the example of blogs (“Web logs”): self-published online journals numbering over 100 million as at 2008. Tim Wu observes that “in general, bloggers writing for fun—or out of single-minded obsession—can thump reporters trying to get home by 6pm.”
But what underlies the fun? It might be argued that it is inherent in the creative process, but that only raises a further question: what underlies that?
At least to some extent, the answer is empowerment: the power to independently create or achieve something one perceives to be of value. The desire for such power is known by psychologists as a mastery, competence or achievement motive, and Maslow placed it at the pinnacle of his hierarchy of human needs, naming it the need for self-actualisation. Sociologists as far back as Weber reached a similar realisation, that the increasing bureaucratic rationalisation of work could be dehumanising; Weber described this trend as an “iron cage” in which humanity was destined to be trapped. Scholars of organisational behaviour have inherited this insight, and proposed strategies by which employees can be empowered (and thus made happier and more productive) by increasing their autonomy at work.
Although the emergence of the open source methodology has been quite orthogonal to this scholarship, it is an exemplar of its programme in the extent to which it empowers the members of the open source community to pursue their own objectives, in their own way, in a manner that is not possible within an hierarchical bureaucracy.
It follows that the licence under which open source software is released, as important as it may be to the success of the software and to the movement as a whole, is not the most critical factor in its success as a software development methodology; rather, it is the empowerment of its contributors that is central. The licence is simply the means by which hackers have institutionalised in law (or rules) the ethic that “all information should be free” in respect of open source software and open content, as they embedded it in the architecture of the Internet in respect of data communications.
On this basis, the egalitarianism of the open source software development model can be seen as reflecting that of the Internet itself. Both are models of anarchistic ordering largely of hackers’ own creation. Thus as already observed it is no coincidence that the Internet is an enabling force for the open source paradigm, levelling the playing field between media juggernauts and software powerhouses, and teenagers writing or coding in their attic. Freed of the hegemony of hierarchy, hackers and others pursuing their need for self-actualisation become more empowered, fulfilled and happy.
However, to characterise the open source software development model as purely anarchistic is simplistic. In most projects, anarchy is balanced with hierarchical control.
It is in fact common for open source software development projects to be governed by a “benevolent dictator for life” (or BDFL). These are found in projects ranging from the Linux operating system kernel itself, of which Linus Torvalds is the BDFL, through Linux-based operating system distributions such as Ubuntu, led by Mark Shuttleworth, and application software such as the Samba networking suite, coordinated by Andrew Tridgell, to programming languages such as Perl, PHP and Python, in which Larry Wall, Rasmus Lerdorf and Guido van Rossum respectively act as project leaders in perpetuity.
In the case of the Linux kernel, Torvalds, who is perhaps the archetype of a BDFL, possesses ultimate authority to decide which contributions (“patches”) should be accepted into the kernel and which should be refused. Torvalds no longer personally manages the whole of the kernel, having delegated authority over particular subsystems and hardware architectures to a number of trusted associates, but it remains his authority to appoint these so-called “lieutenants” and to supervise their work. A document distributed with the Linux kernel source code, subtitled “Care And Operation Of Your Linus Torvalds,” describes him as “the final arbiter of all changes accepted into the Linux kernel.”
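The mechanics of this gatekeeping can be illustrated with a minimal sketch of the patch workflow, using commands from the git revision control tool that the kernel project has used since 2005 (the repository, file name and commit message below are invented for illustration):

```shell
# Hypothetical example: a contributor records a fix as a commit,
# then renders it as an emailable patch for review by a maintainer.
git init --quiet demo
echo 'fix' > demo/driver.c
git -C demo add driver.c
git -C demo -c user.email=hacker@example.org -c user.name=Hacker \
    commit -qm "driver: fix hypothetical bug"
# format-patch produces the patch in email form ("Subject: [PATCH] ...")
git -C demo format-patch -1 --stdout > 0001-fix.patch
```

In the kernel’s actual workflow the generated patch would then be mailed (typically with git send-email) to the relevant subsystem lieutenant, from whom accepted changes flow upward to Torvalds for final arbitration.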
Thus contrary to what might be assumed from Raymond’s claim about “the Linux archive sites, who’d take submissions from anyone,” the Linux kernel development process is neither anarchistic nor consensual: if Torvalds does not like a patch, it does not go into the kernel. This has often antagonised other kernel developers, one of them commencing a long-running thread on the kernel development mailing list by saying:
Linus doesn’t scale, and his current way of coping is to silently drop the vast majority of patches submitted to him onto the floor. Most of the time there is no judgement involved when this code gets dropped. Patches that fix compile errors get dropped. Code from subsystem maintainers that Linus himself designated gets dropped. A build of the tree now spits out numerous easily fixable warnings, when at one time it was warning-free. Finished code regularly goes unintegrated for months at a time, being repeatedly resynced and re-diffed against new trees until the code’s maintainer gets sick of it. This is extremely frustrating to developers, users, and vendors, and is burning out the maintainers. It is a huge source of unnecessary work. The situation needs to be resolved. Fast.
Torvalds’ initially unapologetic response recalls another classic example of his sardonic view of his position as BDFL, when announcing the selection of a penguin logo for Linux. Acknowledging the comments of those who had expressed reservations about it, Torvalds concluded with the quip, “If you still don’t like it, that’s ok: that’s why I’m boss. I simply know better than you do.”
The Mozilla and OpenOffice.org projects provide a slightly different example of hierarchical ordering in open source software development. In these cases, the authority is not that of an individual, but a corporation: originally Netscape Communications in the case of Mozilla, and Sun Microsystems in the case of OpenOffice.org.
This kind of collective hierarchical control over an open source software project can also be exercised by a civil society organisation. The non-profit Mozilla Foundation, for example, succeeded to the rights of Netscape, such as the trademark and rights under the Netscape Public License. Membership of its governing body (or “staff”) is by invitation only. Another example of such an organisation, also taken from one of the most prominent and successful open source projects, is the Apache Software Foundation (ASF), best known for the Apache HTTP Server, which powers the majority of Web sites on the Internet.
The case of the ASF also illustrates well that there are various strata of developers beneath the BDFL. One study has categorised these as core members (or maintainers), active developers, peripheral developers, bug reporters, readers and passive users, and confirmed previous findings that the core developers are generally the smallest group but write the majority of the project’s code. Whilst developers in the lower strata are mostly self-selected, in many projects, including those of the ASF, the core developers are selected by the BDFL, applying stringent meritocratic standards.
In fact, of the examples given of open source projects in which a significant hierarchical structure exists or has existed—the Linux kernel, Mozilla, OpenOffice.org and Apache, as well as Samba and Ubuntu mentioned earlier—each is among the most widely used open source projects of its kind, with a large and active community of developers. How can this be reconciled with the earlier hypothesis that it was the very lack of hierarchy that empowered developers and attracted them to volunteer their services to open source projects?
Although its significance to developers was earlier downplayed, the answer is found in the open source licence. It is the licence that enforces benevolence upon the dictator. It does this by ensuring that for any open source project there is always relatively costless freedom of exit, in that any developers who feel they are being oppressed by a project leader can simply cease participating in the project, take its source code, and use it as the base for a new project of their own (known as a “fork” of the original project). This “exit-based empowerment” enjoyed by developers mitigates the power of the project leaders.
As Torvalds has put it,
I am a dictator, but it’s the right kind of dictatorship. I can’t really do anything that screws people over. The benevolence is built in. I can’t be nasty. If my baser instincts took hold, they wouldn’t trust me, and they wouldn’t work with me anymore. I’m not so much a leader, I’m more of a shepherd.
The Linux kernel has, indeed, been forked numerous times. One prominent fork was that maintained by Red Hat Linux developer Alan Cox, who released a series of kernel source trees that contained patches not yet accepted by Torvalds. However, since 2002 a technical solution to Torvalds’ backlog has been found in the use of specialised revision control software, which has placated many of Torvalds’ critics and rendered many former forks of the kernel obsolete.
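Mechanically, a fork begins as nothing more than a complete copy of the parent project’s source tree and revision history, which the licence guarantees the right to take. A minimal sketch using git (the project names and file are invented for illustration):

```shell
# Hypothetical "parent" project with one commit of history.
git init --quiet parent-project
echo 'int main(void){return 0;}' > parent-project/main.c
git -C parent-project add main.c
git -C parent-project -c user.email=dev@example.org -c user.name=Dev \
    commit -qm "initial import"
# The fork: a clone carrying the parent's entire source and history.
git clone --quiet parent-project my-fork
# Recording the parent as "upstream" merely notes where the fork came from;
# development can now diverge freely under new leadership.
git -C my-fork remote rename origin upstream
```

The ease of these few commands is what makes exit credible; the real cost of a fork, as discussed below, lies not in copying the code but in rebuilding a community around it.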
Both Mozilla’s Firefox browser and the OpenOffice.org office suite have also been forked. The Debian project, for example, has replaced Firefox in its distribution with a forked version called Iceweasel, to escape the onerous trademark licence conditions imposed by the Mozilla Foundation for the use of the Firefox name and logo. As for OpenOffice.org, a prominent fork called NeoOffice has been customised to integrate more smoothly with the Mac OS X operating system. Debian itself has also spawned a number of derivative distributions, Ubuntu being one.
Admittedly, forking an open source project is not costless. Usually the most significant cost is that it will be necessary for the new project leader to establish a community of users and developers to support the project in the long term. For economic sociologists, this is the cost of developing social capital. Thus, the more successful the parent project is (and the more cohesive its communities of developers and users), the higher its social capital will be, the higher the transaction costs of a fork, and the more effectively that fork will have to differentiate itself from its parent in order to overcome those costs.
This is illustrated by the case of Samba-TNG, which forked from the highly successful Samba project in 1999, seeking to differentiate itself by being the first to offer the facility to replace a Microsoft Windows server as the Primary Domain Controller for an office network. However it struggled to build a development community comparable in size and expertise to that of its parent project, which in the meantime implemented its own version of Samba-TNG’s differentiating feature. In comparison, less dominant and less stable projects have been forked more often and more successfully.
This characteristic of the transaction costs associated with migration from one open source project to another provides a cohesive force against the unnecessary fragmentation of open source projects, one that will be overcome only if enough developers become sufficiently dissatisfied to form a viable competing project (which the project leaders have an incentive not to allow to happen, lest they lose their base of developers). In comparison, developers within Microsoft Corporation face much higher transaction costs in replicating their work and their communities elsewhere if they are dissatisfied, if indeed it is possible for them to do so at all.
Thus it is from the unexpected source of the open source licence that a solution is found to the problem of maintaining an organisation under an hierarchical structure to address the limitations of anarchistic ordering: the licence provides an implicit, ongoing consensual check on the power of the authority, which side-steps the difficult task of objectively assessing the authority’s merit in advance.
See Section 188.8.131.52.
That may be the practical effect of the prevailing hegemony of states in any case; that is, provided that a public policy issue is technically amenable to being addressed by rules, there would be nothing to stop governments or intergovernmental authorities from trumping the IGF’s recommendations even if the IGF were not structured in such a manner as to facilitate their doing so. The distinction though, formal as it may be, is between a multi-stakeholder governance forum structured to include a role for formal intergovernmental oversight, and one in which policy development is undertaken in the shadow of the exogenous power of states to intervene in and override the process.
It is still so known by many, notably including the Free Software Foundation; see http://www.fsf.org/.
Both appellations being encompassed by the acronym FOSS or F/OSS; FLOSS is also sometimes seen, adding the French libre.
Stallman, Richard M, The Free Software Definition (1998). A similar but more comprehensive list of ten requirements of open source software was first published by the Open Source Initiative in 1998 in its Open Source Definition (see http://www.opensource.org/docs/osd).
See http://creativecommons.org/, though for criticism of the openness of the Creative Commons licences see Hill, Benjamin M, Towards a Standard of Freedom: Creative Commons and the Free Software Movement (2005).
According to blog analysis firm Technorati; see http://www.technorati.com/about/.
Ubuntu, founded in 2004 (see http://www.ubuntu.com/), is based on an earlier Linux distribution called Debian GNU/Linux, founded in 1993. The Debian project is the more egalitarian of the two; for example its elected Project Leader is directed by clause 5.3 of its constitution to “attempt to make decisions which are consistent with the consensus of the opinions of the Developers” and to “avoid overemphasizing their own point of view when making decisions in their capacity as Leader”: Debian Project, Debian Constitution (2006). In contrast, Mark Shuttleworth, who founded Ubuntu and termed himself its SABDFL (self-appointed benevolent dictator for life), appoints the members of both of its main decision-making bodies (the Technical Board and the Ubuntu Community Council) and exercises a casting vote in those bodies.
A prominent former Debian Developer who resigned in 2006 compared the Debian and Ubuntu distributions by saying, “There’s a balance to be struck between organisational freedom and organisational effectiveness. I’m not convinced that Debian has that balance right as far as forming a working community goes. In that respect, Ubuntu’s an experiment—does a more rigid structure and a greater willingness to enforce certain social standards result in a more workable community?” (quoted in Byfield, Bruce, Maintainer’s Resignation Highlights Problems in Debian Project (2006), which links to the original source).
The position of BDFL normally falls to the developer who initiated a project, though in the case of multiple original core developers, the phenomenon of a benevolent oligarchy for life is not unknown (for example Matt Mullenweg and Ryan Boren for the WordPress blog engine at http://wordpress.com/).
See Documentation/SubmittingPatches within the kernel source tree which can be downloaded from http://www.kernel.org/.
For a more detailed case study of Linux kernel development see Schach, S, Jin, B, Wright, D, Heller, G, & Offut, A, Maintainability of the Linux Kernel (2002).
Originally published on Usenet at news:email@example.com.Helsinki.FI, now archived at http://groups.google.com/group/comp.os.linux.advocacy/msg/ee350cc97f7d0e69.
For more detailed case studies of these projects see Holck, Jesper & Jørgensen, Niels, Do Not Check In On Red: Control Meets Anarchy in Two Open Source Projects (2005) and Mockus, A, Fielding, R T, & Herbsleb, J D, Two Case Studies of Open Source Software Development: Apache and Mozilla (2002) for Mozilla, and Strba, Fridrich, From TrainedMonkey to Google SoC Mentor (2006) for OpenOffice.org.
As well as leading development, Netscape originally held the “Mozilla” trademark (as Linus Torvalds does for “Linux” in various jurisdictions: see http://www.linuxmark.org/), and until 2001 required modifications to its source code to be licensed under terms that exclusively exempted it from the copyleft provisions applicable to other users: see http://www.mozilla.org/MPL/FAQ.html in its description of the Netscape Public License.
Sun requires contributors to the OpenOffice.org project to assign joint copyright in their work to it: see http://www.openoffice.org/licenses/jca.pdf.
See http://www.apache.org/. The Apache Software Foundation is a non-profit corporation governed by a board of nine directors who are elected by the Foundation’s members for one-year terms, and who in turn appoint a number of officers (66, in 2008) to oversee its day-to-day operations. As of 2008 there are 249 members of the ASF, each of whom was invited to join on the basis of their previous contributions to ASF projects, and whose invitation was extended by a majority vote of the existing members.
For a more detailed case study of Apache see Mockus, A, Fielding, R T, & Herbsleb, J D, Two Case Studies of Open Source Software Development: Apache and Mozilla (2002).
Originally, ironically, a proprietary product called BitKeeper, and subsequently an open source equivalent called Git written by Torvalds himself: see http://git.or.cz/.
The same phenomenon is found in other open content development communities. For example in 2002, Spanish Wikipedians who were dissatisfied with the Wikipedia project created their own fork, Enciclopedia Libre (“free encyclopædia”), as permitted by the GNU Free Documentation License under which Wikipedia’s content is licensed: see http://enciclopedia.us.es/. More recently Larry Sanger has attempted to do the same, creating “a responsible, expert-managed fork of Wikipedia” titled Citizendium: see http://www.citizendium.org/.
Uphoff, N, Understanding Social Capital: Learning from the Analysis and Experience of Participation (1999). Social capital can be formally defined as “the value of those aspects of the social structure to actors, as resources that can be used by the actors to realize their interests”: Coleman, J, Foundations of Social Theory (1990), 305.
For example, the oft-criticised PHP-Nuke content management system: see http://phpnuke.org/ and Corbet, Jonathan, PHP Nuke Remains Vulnerable (2001). These forks include Post-Nuke at http://www.postnuke.com/, Envolution at http://sourceforge.net/projects/envolution, MyPHPNuke at http://sourceforge.net/projects/myphpnuke and Xoops at http://www.xoops.org/.