(C) 2002 Niall Douglas
This brochure explains the Tornado idea, our vision of the future of computer operating systems, and how, with your help, we can turn it into the next standard operating systems platform.
The idea began in 1995 when I noticed how the current design of computer operating systems severely limits a computer's usefulness, and it occurred to me that restructuring it could vastly increase user productivity, decrease development times and greatly lower the total cost of ownership. This led to two failed attempts at constructing a prototype, in 1996 and 1998; for the third I saved up money, took nine months off work and made the concerted effort which resulted in the working prototype I have today.
I have over sixteen years of programming experience across a wide range of computer systems, more than six of them professional, in a variety of roles in many countries. I have worked successfully on solo and team projects, both as an engineer and as a team leader - indeed, my business team aptitude test results indicate I am a multi-role-capable team worker (or "plant", as the test called it).
In my opinion, and in that of respected others, this idea is capable of revolutionising the computer software industry and thus of positioning us as the number one operating systems provider.
It is my intention, with your help, to create a number of pieces of computer software which can run on all three major operating systems of today: Microsoft Windows, GNU/Linux and Apple MacOS X. There will be a kernel which provides the extended functionality on top of the underlying operating system. There will also be a collection of components which modularly extend available functionality - the user may install additional components, or indeed write their own.
The product, Tornado, will look and function almost identically on all three platforms. Furthermore, instances can combine and work together irrespective of which operating system they run on, e.g. the product on Windows can work together with one on an Apple or indeed a GNU/Linux machine.
The language used to communicate with the user is completely configurable, and the product uses the Unicode standard for text representation throughout. Furthermore, a full security model is provided which offers a superset of the security functionality provided by any of the three operating systems listed above.
I have a five year plan which should take us to at least the stage of a major international operating systems player, with income exceeding costs in the third to fourth year. Our intended customer base shall begin with the computer enthusiast community across all the major platforms, moving onwards as an enabling solution for business intranets and eventually as the standard solution framework for the majority of computer-based applications worldwide.
There shall be three things which distinguish our product from that of the competition:
For the first two years, I and one other programmer shall develop the core functionality, releasing free "beta" releases to the enthusiast community in order to (a) build the product's reputation, (b) build a third-party software base and (c) create a pool of programmers capable of developing for our system.
With a small but capable base of available programmers and positive recommendations from technical advisors to companies, we shall be in a position to begin to offer enhanced copies for sale to business and users.
It is intended always to keep a cut-down version freely available to encourage new programmers to learn the technology and thus enter the pool of available engineers. To further this goal, a considerable quantity of the source for the client libraries shall be provided for inspection and learning purposes.
As previously mentioned, for the first two years I and one other programmer shall be most of the company. Administrative and legal affairs shall be handled for us by a contracted third-party administrator.
Entering the third year, we shall need one support person due to our increased interaction with paying customers. Their responsibilities shall include handling telephone queries and maintaining our web presence.
Thereafter, we shall increase our manpower dependent on needs and available funds. One strong option is the creation of a consulting division to advise companies on the successful implementation of our product.
It is intended that all-inclusive costs per employee for the first two years should not exceed 50,000 euro p/a (35,000 stlg p/a). Obviously, after tax, social security and other costs this does not form an attractive remuneration package - however, I am confident that by offering shares in the company we can successfully recruit the right kind of engineer. The major benefits are reduced costs during the implementation period and less chance of losing employees.
In the third year and thereafter, since we are unlikely to find non-programmers willing to accept company shares in lieu of pay, staffing costs shall rise considerably. However, as we shall be beginning to receive orders at that time, we believe we shall remain in the black.
Noticeably absent from the above are costs for sales and marketing. This is because I believe they shall be unnecessary for the first four years - however thereafter, we shall require one or two sales and marketing staff.
It is oft said that the grapevine is the best form of marketing and we intend to rely exclusively on the superior reputation of our product which we shall have carefully cultivated for the first three years.
We also shall not provide training facilities until at least year four or five (coinciding with the hiring of sales people) as we shall be depending on enthusiasts who train themselves for free using the tutorials we shall provide. Thereafter, especially as it is popular with business (and also very profitable), we shall run training courses in our technology.
There are three main weaknesses that I can foresee:
I recommend taking out at least four software patents within the US in order to improve our chances should US multinationals launch a full-scale assault upon us. History shows a typical remuneration of US$150 million per patent broken, so at an investment of US$20,000 each they are good value.
Product distribution for the first two years shall be done entirely by internet. Since our target customer base resides 99.9% on high-speed connections to the internet, we do not foresee this as a problem.
Installation package size (and thus the customer's average download time) will remain considerably below 50 MB. Over a 56.6k modem this takes around three hours, although few of our customers will use such a slow connection.
Advantages of the approach include very low costs for production and distribution as it is the customer who pays for most of it. In the third year and thereafter, it is planned to offer the product on a CD-ROM with manuals for an appropriate extra fee.
All figures below are given in euros and assume a typical large city in north-western Europe.
First two years:
2x Programmers @ 35k p/a = 140,000
1x Office p/a = 24,000
4x US software patents ($20k each) = 80,000
Other associated costs = 16,000
Start-up costs = 10,000
Total = 270,000 for first two years
Thereafter per year:
3x Programmers @ 35k p/a = 105,000
1x Support & Web site maintainer @ 30k p/a = 30,000
1x Office p/a = 24,000
Other associated costs = 10,000
Total = 169,000 p/a
Employees would receive preferential shares in the company in addition to their payment.
7th November 2002
Contact details: firstname.lastname@example.org
10k sterling a year? 400 a month rent, 50 a month electricity/gas/rates, 50 a month ADSL, 200 a month food, 100 a month entertainment.
1000 (1600 euro) Qt licence, 4x 10k patents,
Total: 2x 10k + 1k + 4x 10k = 61k. Would need 35k p/a per additional
Old brochure (much more technical):
Operating systems and the programs they run for computers have gone through three main organisational generations:
Tornado takes the step to the next generation of software organisation: the Data Component. We break software down into parts, or components, each of which does something to a certain type of data. This could be as simple as finding all the lines in a piece of text containing a certain word, converting a JPEG into a bitmap, compressing data, or permitting the user to modify the data in some fashion. Writing a solution means combining reusable data processors to achieve an end result.
Tornado takes its cue from previous generations and provides a system-maintained repository of data-processing components (hereafter referred to simply as components). It does not matter how a component works or how it is controlled; all that is guaranteed is that it takes one kind of data and offers it as one or more other types. Beyond the required minimum API, it does not matter what interface any component has, which means one component can be transparently exchanged for another without loss of basic functionality.
These components interact via data streams, a data stream being merely a stream of typed data. Each component offers typed connection points, with its outputs typed according to its inputs, and between these inputs and outputs flow data streams. A data stream may travel between hard disc and memory, between memory and other memory via a network, or any variation in between. The only difference between a network connection and memory is that the latter is much, much faster.
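The component-and-stream model described above can be sketched in a few lines of Python. Everything here - the `Component` class, `connect`, the type names - is illustrative only and is not Tornado's actual API; it merely shows how typed connection points let unrelated processors be chained and checked:

```python
# Hypothetical sketch of the Data Component idea: each component
# declares the type of data it accepts and the type it emits, and
# components are chained by matching those types.

class Component:
    def __init__(self, name, in_type, out_type, fn):
        self.name, self.in_type, self.out_type, self.fn = name, in_type, out_type, fn

def connect(*components):
    """Chain components, checking that each output type matches the
    next input type - the 'typed connection points' idea."""
    for a, b in zip(components, components[1:]):
        if a.out_type != b.in_type:
            raise TypeError(f"{a.name} emits {a.out_type}, "
                            f"but {b.name} expects {b.in_type}")
    def pipeline(data):
        for c in components:
            data = c.fn(data)
        return data
    return pipeline

# Example: text -> lines containing "GET" -> sorted lines
split_lines = Component("split", "text", "lines", lambda t: t.splitlines())
grep = Component("grep", "lines", "lines",
                 lambda ls: [l for l in ls if "GET" in l])
sort_lines = Component("sort", "lines", "lines", sorted)

run = connect(split_lines, grep, sort_lines)
print(run("GET /b\nPOST /a\nGET /a"))   # → ['GET /a', 'GET /b']
```

Because only the types at the connection points matter, any component in the chain could be swapped for another with the same input and output types without the rest of the pipeline noticing.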
Our vision of the fourth generation of computer software organisation permits massive reuse of code and data in a way not seen hitherto. Processing can automatically distribute itself across computational units, whether between networked workstations or between processor clusters in a Non-Uniform Memory Access (NUMA) computer of the future. Programming time, user time and maintenance time all decrease massively compared with current-generation systems and, as you shall see, the way we have implemented our vision adds countless other benefits over and above those already described.
Hitherto we have seen many great ideas for improving operating systems come and go (most recently BeOS, before that NeXTSTEP). The major failing of these projects, in our view, was that although they could run on industry-standard hardware, they could not run on industry-standard software, and thus they precluded convenience of use for the vast majority of would-be customers.
Tornado has not repeated this mistake! It has been written on a portability layer which abstracts it from the details of the host operating system so that Tornado can run equally well on Microsoft Windows, Linux and Apple MacOS X. Furthermore, any machine running Tornado can transparently work with any other machine running Tornado irrespective of host operating system.
Obviously, the Tornado environment fully integrates itself with the host OS environment, permitting interoperability of the two. We feel this solves the major fault of all previous failures.
We have furthermore greatly improved the user interface, using a range of techniques garnered from psychological studies to make better use of intuition and to extend the learning curve, massively improving productivity potential. With any operating system interface there is a balance to be struck between ease of use and provision of power. In Tornado, we make it easy for a beginner to learn, but we keep the learning curve going much longer - as the user continues to learn new methods of achieving their goals, their productivity with the system increases. We expect users of Tornado to be more productive than with any other current system.
The last major plank upon which we are building our success is much-improved functionality. Built into the system are such features as automatic versioning (so a complete change history can be held for any piece of data), protection against data loss due to computer crashes, increased robustness and reliability, a database-like filing system against which queries may be performed, and excellent inbuilt contextualised help. Old annoyances such as directory navigation to save a file, or the installation and uninstallation of programs, have been done away with. Components cooperate and glue together in a fashion currently unknown on popular operating systems. On a technical level, we feel Tornado is second to none.
We have demonstrated we have the theory and implementation on our side. But what about the eternal question of how to turn this idea into actual cash?
We foresee a five year plan. For the first two years, between one and three programmers shall work on providing core functionality to support the wide range of components required for maintaining a large userbase. During this time it is unlikely many customers would want to purchase such an advanced system, so we shall concentrate on building its reputation and casual use within the computer enthusiast community. It is our aim that they should prefer to use our system on their computers at home mostly for fun and because of the benefits provided, and they should as a result write new components for Tornado so that a software base is built.
In the third year, we begin to push Tornado to companies as an integrative solution which solves many problems that are currently unsolvable. Our supporters in the enthusiast community will provide assurances of its worth through their own personal experience and that heard on the grapevine. The fact it runs on so many existing systems will mean nothing is risked by installation, and we are confident its superior user interface will accustom users to it quickly and with a minimum of training.
In the fourth and fifth years we aim to be more flexible, configuring ourselves according to customer desires but all the time building ourselves to become the next predominant computer operating system. As we will have had a head-start, competitors will be left behind as we increase our domination of all new software development and the profits roll in.
It is intended throughout to keep a binary-only version available for GNU General Public Licence (GPL) use, i.e. users may write components for free provided they make their own components available under the GPL. As history has shown, this encourages people to try out the API and to become enamoured with how much better and more fun it is to program for our system than for any other. This aspect will be crucial in winning new commercial software development over to us, which of course generates revenue for us.
There is a possibility however in the third year that one or more of the large US multinational software companies may view us as a substantial threat and make an offer to buy us out. The most likely candidates will be first Microsoft, followed by IBM, then one of the larger Linux retailers. We anticipate that the longer we have without an offer, the greater that offer would be and furthermore aided by our US patents (see below), we should be able to realistically demand several hundred million dollars.
We offer a few case examples of how Tornado easily solves traditionally difficult or unsolvable computational tasks, to show why we expect it to become the next de facto computing standard. Tornado's ability to link any selection of tiny processing components together creates a level of power hitherto unknown, not just for the programmer but for the user.
For those of you familiar with the Unix command line, you will know that Unix has contained this kind of power for years. For example, zcat ~/www_logs/*.gz | grep "GET / .* \"-\"" | sort | /usr/local/bin/logresolve > ~/public_html/regusers.txt decompresses all the compressed files in ~/www_logs, searches for the matching request lines, sorts them alphabetically, resolves the IP numbers in them to names and writes the results into ~/public_html/regusers.txt. This combines four separate programs, each knowing nothing of the others, to produce output no one of them could alone (on non-Unix systems the same result would require writing a special program, which could be a lot of work). Tornado does the same for all programs on the desktop. Note that in the following examples it is a user, not a programmer, who does the following (a programmer can do much more again).
Say you have a powerful computer holding a video in MPEG4 format, connected by a network to a much older machine incapable of playing MPEG4. Currently, to watch that video on the older machine, you would have to find a package capable of recompressing it into MPEG1, and of course use large quantities of disc space for temporary files and for the MPEG1 file itself, which would be huge. In short, this is so inconvenient it isn't done.
With Tornado, you would plug the MPEG4 video file into an MPEG4 decompressor, that into an MPEG1 compressor, and that into an MPEG1 viewer on the older machine. Whether processing happens on the local machine or across many machines is transparent in Tornado. On the older machine, you would literally see no difference whether the file was a real MPEG1 or not.
Furthermore, it would not matter in the slightest which of those three components you used, who made them, or where they are located (the MPEG4 and MPEG1 components could reside on yet another machine). The API transparency means any disparate combination of machines, programs and even CPUs is of no import.
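The transcoding chain just described can be sketched with stand-in functions. Real Tornado components would wrap actual codecs; here trivial dictionary transforms stand in for them so only the flow of typed data is visible, and every name is hypothetical rather than part of any real API:

```python
# Stand-in 'codecs' modelling the MPEG4 -> MPEG1 -> viewer chain.
# Each function checks the format of the stream it receives, mirroring
# the typed connection points of the component model.

def mpeg4_decompress(stream):
    # MPEG4 -> raw frames (would run on the powerful machine)
    assert stream["format"] == "mpeg4"
    return {"format": "raw", "frames": stream["frames"]}

def mpeg1_compress(stream):
    # raw frames -> MPEG1
    assert stream["format"] == "raw"
    return {"format": "mpeg1", "frames": stream["frames"]}

def mpeg1_view(stream):
    # the older machine's viewer - it only ever sees MPEG1
    assert stream["format"] == "mpeg1"
    return f"playing {stream['frames']} MPEG1 frames"

video = {"format": "mpeg4", "frames": 1500}
# Plugging one component into the next; in the Tornado model, whether
# each step runs locally or on a remote machine is transparent.
print(mpeg1_view(mpeg1_compress(mpeg4_decompress(video))))
```

The viewer at the end of the chain cannot tell whether its input came from a real MPEG1 file or from a live conversion on another machine, which is exactly the point of the example.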
Say you receive by email a document in some format you've never seen before. Currently, you would have to go hunting around the internet for a free viewer, with the likely possibility that there isn't one, so you'd probably have to contact the sender and request the document in some other format you are able to read. It's quite possible there is no alternative format, meaning you can't use the file at all, or must purchase a copy of a program just to view it!
With Tornado, this can't happen. Upon receiving some document, Tornado can transparently use a different machine with the necessary tools installed on it to convert the document into some other format you can use. Because only data moves between machines, the chances of getting viruses on your machine from infected downloads are negligible and the whole traditional problem of incompatible file formats is solved forever.
Say you have some data which requires lengthy processing. Currently, you would have to write a custom piece of software which distributes worker modules out across a network of computers, plus yet more software to coordinate them all to work coherently. A good existing example is the SETI project, whose screensavers are the distributed worker modules - however, you must currently rewrite a custom solution for every problem of this kind unless you want to pay big bucks.
With Tornado, distributed processing is inherent in the system. Everything, even on your local computer, is distributively processed; whether the work remains on your local computer or is shared across one hundred computers is of no consequence. This is also highly relevant to future computer designs - we are currently making the transition from uniprocessor to SMP multiprocessor systems, and the next evolution will be from SMP to NUMA. Tornado is already built to work excellently on NUMA multiprocessing architectures. We are confident this software will grow enormously in usefulness, without alteration, over the next twenty years.
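The location-transparency claim above can be illustrated with a small worker-pool sketch: the caller submits chunks of work and never cares where each chunk runs. Here the pool is local threads; in the Tornado model the same interface could just as well dispatch to other machines. The names (`distribute`, `process_chunk`) are illustrative assumptions, not any real Tornado API:

```python
# Sketch of location-transparent distributed processing: split the
# data into chunks, hand each chunk to a pool, combine the results.
# Swapping the local thread pool for a remote dispatcher would leave
# the caller's code unchanged - the SETI project achieves the same
# effect only by hand-writing its own distribution layer.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for a lengthy computation on one slice of the data.
    return sum(x * x for x in chunk)

def distribute(data, chunk_size=4, workers=4):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

print(distribute(list(range(10))))   # → 285, same as sum(x*x for x in range(10))
```

The caller sees a single function call; how many workers exist, and where they run, is purely a deployment detail.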
4th November 2002
Contact details: email@example.com
Free Software type companies: