The User-Developer Convergence: Innovation and Software Systems Development in the Apache Project
Software systems development can be understood as the effort undertaken to create software for computer-based systems. There are a number of approaches to developing software, and what software systems development actually is depends on the context. Whether developing software for personal use or for enterprise systems, the core activity is nevertheless the same: creating software for computer-based systems. The difference lies in scope, complexity, and context. Developing software used to land aeroplanes requires far more accountability than developing software to automate trivial tasks. Developing an enterprise system involving numerous departments and employing scores of developers requires a wholly different kind of management than leading a small team of a handful of developers. Different approaches have been devised to meet the different environments software is developed in and for.
Software systems development as a discipline has evolved since its infancy in the mid-20th century. In the early days of computing, software costs comprised a small percentage of overall computer-based system cost. Throughout the late 1960s and the 1970s, software became an increasingly important part of such systems. The application of computer-based systems grew; with growth came added functionality, and with added functionality came increased complexity. Scientists and practitioners have tried to devise new ways to meet the challenges of building large-scale computer-based systems [PRESSMAN1996](Pressman 1996). Yet, "an increasing number of practitioners are beginning to realize that old approaches are not appropriate any more" [MILLER1992](Miller 1992, p. 93).
The first programmable computers were put to use in the mid-twentieth century. In this early stage of computing, computers served as large programmable calculators, set to solve complex mathematical problems. Hardware was initially the limiting factor, but it improved rapidly. With improved hardware the application of software grew, leading to increased complexity in software. Already in the early 1960s computer professionals warned about these problems. "A combination of increasing complexity of systems and the relative inexperience of systems development staff led to late deliveries of systems, escalating costs and failed software projects" [FRIEDMAN1989](Friedman 1989, p. 99). Another pertinent issue was the increasing cost and difficulty of maintaining existing software systems. The limiting factor was shifting from hardware to software. Some came to call this shift of emphasis the software crisis.
By the late 1960s the problems caused by this shift were becoming apparent, and the computer industry as a whole increased its focus on how software was developed. A turning point came at the Garmisch Conference of 1968. Addressing these problems, the conference participants agreed that the solution would be to apply the tried and tested methods of traditional engineering to software development. Calling this software engineering, they recommended standardizing program structure through modularization and bringing order to the systems development process. "These recommendations would increase the observability and measurability of systems development and thereby increase management control" [FRIEDMAN1989](Friedman 1989, p. 106).
Frederick Brooks [BROOKS1995](Brooks 1975) argues that software engineering was not a silver bullet that would solve the problems facing software systems developers. During the 1970s it became apparent that software engineering had its limitations; there were still problems left to be solved. Different responses to this failure were suggested. One of them, as drawn up by Claudio Ciborra, explains the problem accordingly:
Information systems development often does not lead to a more effective and human organization … because systems designers do not apply structured methods or organization … [CIBORRA1987](Ciborra 1987, p. 2).
The answer to the problem is not Ciborra's own, but rather his recounting of the arguments of software engineering's proponents. For these, the response to the shortcomings was to continue the quest for a better methodology. During the late 1970s and early 1980s a new methodology emerged—object-orientation. With its roots in a programming paradigm from the early 1970s of collecting data and their operations into objects, practitioners saw the possibility of transferring the basic principles of object-oriented programming into the domain of software engineering. By the 1990s this approach would be embraced by the computer industry as a whole as the silver bullet that would solve all the problems of software engineering. While it is apparent that object-orientation did help bridge the gap between systems modeling and realization, and proved itself an invaluable tool for conceptualizing software systems, did it really address the underlying problems of classic software engineering, or was it simply yet another methodology in a long line of succession?
As Miller observed in 1992, despite the efforts of the software engineering community, "an increasing number of practitioners are beginning to realize that old approaches are not appropriate any more" [MILLER1992](Miller 1992, p. 93). Since the early 1970s, a group of researchers and practitioners have argued that neither software nor hardware is the limiting factor in software systems development; the shortcomings, they argue, are human. Software engineering has eased issues like schedule slippage, bugs, and failure to stay within budget, but does it address the source of the problem? As early as the late 1960s and early 1970s, software systems developers noted the discrepancy between users' requirements and software systems [BOGUSLAW1965](Boguslaw 1965) [MUMFORD1972](Mumford 1972). It is argued that software "systems has to be useful in terms of allowing users to do their jobs better" [FRIEDMAN1989](Friedman 1989, p. 175). Software constraints are no longer the sole limiting factor; user relations constraints are also a limitation. Not only did computer professionals admit that human-computer interaction was poor; studies show that developers have problems understanding users' needs and requirements for software systems.
For those subscribing to this new view of software systems development, it was no longer a question of producing the software system right, but of producing the right software system to meet its users' needs. This strain of software systems development is sometimes called the alternative approach. The alternative approach springs from fields as varied as linguistics, psychology, and sociology; its only common denominator is that its practitioners seek alternatives to the ways of software engineering. An implication of the shift of focus from producing the software system right to producing the right software system was that attention turned towards the users' needs and requirements. With this shift came an appreciation that there were conflicting interests between participants in the software systems development process—especially users and management [BANSLER1989](Bansler 1989).
With its roots in the technical communities of the 1960s, hacking has been a strain of software systems development running in parallel with software engineering for the past thirty-odd years. Hacking is not to be confused with its malicious cousin, cracking, which aims at breaking into software systems. An entire mythology has been spun around hacking over the past three decades. The first written account of hacking is the Jargon File, an inside view of how the hacker community wants to see itself; it was later published as The New Hacker's Dictionary [RAYMOND1998](Raymond 1998). The Jargon File was a collection of jargon from the early technical communities in the United States, the first hacker communities. It became, quite typically of the culture, a shared repository of technical slang updated by a great number of people over its almost thirty-year lifespan.
The hacker is "a person who enjoys exploring the details of programmable systems and how to stretch their capabilities" [RAYMOND1998](Raymond 1998, p. 233). The hack is "an incredibly good, and perhaps very time-consuming, piece of work that produces exactly what is needed" [RAYMOND1998](Raymond 1998, p. 231). Hacking is consequently the activity a hacker undertakes to produce a hack. Hackers like to see themselves as computer experts pushing the limits of technology. Writing about the early hackers at MIT's AI Lab, Steven Levy identified six principles he coined the hacker ethic, of which the hands-on imperative, the principle that all information should be free, and the distrust of authority and promotion of decentralization are those that ring through all literature on hacking.
At the core of hacking lie key values such as openness, sharing, freedom, and cooperation. Its practitioners considered hacking a community-based approach to software systems development. As an approach to software systems development, hacking has produced viable systems in widespread use, including the Unix operating system, the Internet standards, the GNU project, and more recently the Linux operating system and the Apache Web server.