This is another blog post touting the final demise of the software giant, Microsoft. There have been many posts of this sort, and so far none of those predictions have come true. Microsoft is still a giant. Open Source hasn't killed Windows.
Considering that so many of our computers run Windows, and that customers need Windows, it makes sense to examine the future of the Windows platform. I'll delve into the economics and offer a theory of Microsoft's demise.
Moore's law is the observation that the number of transistors on an integrated circuit doubles roughly every two years. Historically this has largely held, in part because hardware manufacturers have deliberately aimed for it. Microsoft Windows rode that trend to success, but now, as the market saturates, the same trend may help bring Microsoft down.
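As a concrete illustration of what that doubling claim means (a minimal sketch only; the 2008 baseline transistor count and the two-year doubling period are my own round-number assumptions, not measured figures):

# Moore's law as arithmetic: the transistor count doubles every 'period' years.
# The baseline (800 million transistors in 2008) is an assumed round number.
def projected_transistors(year, baseline=800e6, start_year=2008, period=2):
    doublings = (year - start_year) / period
    return baseline * 2 ** doublings

print(projected_transistors(2018))  # roughly 25.6 billion by 2018, if the trend held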
The problem has to do with two aspects. One is that the law was largely made true by marketing and by the hardware manufacturers themselves (see Wikipedia). The other is that people only need so much computing power.
Hardware has adhered closely to the principles of Moore's law, and the computer industry boomed. Microsoft is a company that knows how to ride a booming industry, and not much else. Microsoft is a retail outfit: they buy software (I mean they actually purchase the code, all six legal rights), polish it, make it integrate (a real value, I seriously mean that), and then sell licenses. They do produce a considerable amount of original code, but by and large their model is purchase and resell.
Microsoft has depended on obsolescence, because each new computer requires a new license. It has also depended on rapid hardware development (Moore's law), because operating systems need drivers to run the hardware, and many companies would only produce drivers for Microsoft Windows. Then came Dell.
Because of that driver support and the ready market for computer distribution, Windows offered the final piece to put on a computer to make a complete product--something that needs nothing else and works right out of the box. Dell came along and became a big player in the industry.
As I understand it, there is an S curve to which markets adhere. Markets are limited, since people need and want only so much from computers. I don't need a gaming rig just to check my email... an EeePC would do fine for that. Likewise, the computer market is beginning to show signs of saturation.
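To make the idea of saturation concrete, here is a minimal sketch of a logistic (S-shaped) adoption curve; every number in it is illustrative, not market data.

# A logistic (S-shaped) adoption curve: fast growth early on, then a
# flattening as the market saturates. All parameters are illustrative.
import math

def adoption(t, ceiling=1.0, midpoint=10.0, steepness=0.6):
    # Fraction of the potential market reached at time t (years).
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

for year in (0, 5, 10, 15, 20):
    print(year, round(adoption(year), 2))   # 0.0, 0.05, 0.5, 0.95, 1.0

Early on the curve climbs quickly; later it flattens, which is exactly the saturation I'm describing.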
In the old days when Microsoft began, computers were very clunky and required some expertise just to configure. Hardware could have IRQ conflicts, memory was very limited (the motherboard could only address so much) and expensive, computer cases would shred and ribbon your hands if you tried to work on them (trust me), CPUs were slow as molasses, and video cards produced only a few colors and low resolutions. Basically, computers were there because they were needed and not because they were a whole lot of fun.
CPU speeds improved for a time according to Moore's law, but now gains come mostly from architectural changes such as multi-core designs and larger word sizes. Memory is getting very fast, but only 4 GB seems to be needed for desktops (and only if you have huge or many applications open and running, or you're running Windows Vista). Older technologies such as clunky ATAPI and the slow PCI bus are being dropped in favor of newer, faster, easier-to-configure buses (although PCI is pretty good). Peripheral technology like USB is hot-swappable and really easy to handle. Nowadays, computers are very elegant machines. (Some of the older technology such as ISA is still used on many modern motherboards--sensors connect over the ISA bus.)
Microsoft tried to push hardware obsolescence with Vista. Its unnecessarily high hardware requirements gave the memory market a boost, but Vista is failing.
The Asus EeePC came onto the market with Linux, was low cost, and proved the low-end market viable. Dell recognized this and has now produced its own low-end laptop. The computer market is beginning to show signs of saturation, and hardware distributors are going to see diminishing margins.
There's no need for 4 GB of memory for most of what computers are expected to do these days. Most people were not enthused by the pretty desktop with its transparent windows and widgets. Vista is a flop, entirely. It doesn't sell.
Combine the Vista flop with hardware distributors trying to cut costs as margins shrink, and Microsoft cannot pull the hardware market along the way it tried. The declining economy makes this even worse. If Dell and other hardware distributors have to, they will promote Linux and Open Source. That is, the saturating market will cause Microsoft to choke.
Microsoft is a large company that's based on the growth of the computer market. With diminishing margins, it makes less and less sense to package Windows because Linux is free, and runs better, even on slower hardware.
The future may see a shift towards Open Source as companies like Dell try to market it. IBM already does. Dell is beginning to sell laptops with Linux pre-installed. Cheaper computers with Linux are a hit. If Dell and other distributors continue to see smaller margins, they may take the plunge and sell Linux solutions instead of Microsoft solutions. This in turn will encourage the adoption of Open Source, and as Microsoft has already proved, all software needs in order to dominate is to get as many people as possible to know and trust it. The effect will then cascade: workers will be trained to use Open Source, they will see Linux computers available at a lower price, and Microsoft will be brought down to size.
Anyone with an interest in the computer industry should at least be prepared for a potential shift in the market. If I am right, a degree in .NET, among other things, would become useless.
Thursday, September 4, 2008
Wednesday, September 3, 2008
Windows and Linux Are Successful: A Quick Introduction Into Success Metrics
There has been some debate over the success or failure of Linux on the desktop, on the mainframe, and in the server market. Linux has finally started to gain some traction, especially in the server market. As for the success of Linux on the desktop, there are all sorts of blogs on the topic.
Some blogs say that Linux is not ready, and some say otherwise. Some arguments are old, some new; some are relevant and some aren't. There are gorgeous Linux desktops, and then there is the command line. Although the Linux desktop grows more and more user friendly, it hasn't taken hold of the market the way OS X has, and certainly not the way Windows XP has.
It seems that Windows is successful and Linux isn't. However, I say this is a mistake. Using install base as the metric for success ignores the goals of both systems. Both have achieved the goals of their authors.
The goal of GNU is to free one from the use of proprietary software, and the goal of Windows is to make money. Windows is now so strong that Microsoft can discontinue a successful product to force everyone to purchase a lesser one. Despite the glaring shortcomings of Windows Vista and the sheer success of XP, Microsoft has the audacity to push Vista.
Microsoft's real strength is its enterprise installations. People might not try to learn a new operating system on their own, but for a job they will. Since Microsoft can sell its product to pretty much any business, workers will be trained on it. That leads people to purchase Windows for their home computers, something neither Linux nor OS X enjoys.
Linux was developed by a bunch of programmers who wanted to make a free operating system. Generally, programmers are geeks and will take the time just to learn something new. When they talk about technology, what they're really saying is, "Look at how neat this is." That's why there's so much confusion between geeks and everyone else. Geeks don't realize that other people don't value the same things they do, and those other people don't realize that the geeks might actually know something they don't. (In fact, I use the word 'geek' to relate to readers who don't know much about the Open Source communities; the proper term is 'hacker,' but by that most people understand "evil person who causes all sorts of chaos electronically.")
The geeks succeeded.
GNU/Linux plays music and video, downloads, uploads, runs video games and even Windows applications (really, this is about the only place an application written for an entirely different platform will work), handles email and browsing, and does anything else you'd want to do with a Windows system. One is no longer forced to use Windows, unless Wine cannot run those one or two vital applications. It's initiative that still holds many people back: "Oh, the learning curve!" (That's Linux's greatest weakness--it requires initiative.)
Windows is successful in that it earns Microsoft tons of money, and there's more than enough evidence for that to dismiss even the most ridiculous doubt. Linux is free and works.
Usually, when someone talks about success, the context implies which system is winning, and that's Windows, although Linux is beginning to gain traction with the Asus EeePC, Dell's Open Source line, Canonical's Ubuntu, and all the little upstart niche vendors selling computers preloaded with Linux (usually Ubuntu). So Linux may yet kill Windows, it may dominate, and it may be catching up.
Both Windows and Linux are successful because both have accomplished their respective goals. Windows is winning, although that might not be the case in the future. If one wants to decide which is successful, one must examine the goals, because software isn't written only to make its author money. To assume so is either naivety or ignorance.
Tuesday, September 2, 2008
Open Source's Greatest Strength: Freely Available Information
Sometimes the greatest splendor of a technology isn't found in its technical merit. All sorts of aspects could qualify as technical merit (even if only arguably): usability, security, stability, functionality, and so on. These are intrinsic qualities of a technology; that is, they are qualities the authors and engineers build into it that cause it to succeed in its goal. To really understand Open Source's greatest strength, an introduction to the strengths and weaknesses of software is presented here.
There are other elements that cause software to succeed or fail beyond the intrinsic qualities: marketing, enthusiasm, previous dominance, third-party hardware and software support, and so on. Often, perhaps more often than not, software succeeds despite its faulty intrinsic nature, and vice versa. Arguments in favor of faulty software that has succeeded anyway usually silently admit its technical failures, and vice versa (that is, software so good that it succeeds against all odds without a full arsenal of extrinsic strengths). Thus we have two areas that cause software to succeed or fail: intrinsic and extrinsic aspects.
Considering that all software is ultimately written by some human or team of humans, the nature of software is dictated by humans. This is important because humans determine the philosophy behind the software, and by that I mean its spirit. It's the philosophy that predetermines the intrinsic and extrinsic strengths and weaknesses of software. So the reason why software is being developed is the key to its spirit.
Now, when software is written to earn money, its strengths are usually external (which means it's usually technically inferior, yet obtains and retains dominance). It usually enjoys well-meaning and well-earning (as in, they have good jobs) professionals who insist on the superiority of a certain product. It's sad, but when someone spends $12,000 USD, they tend to scoff at free alternatives, and it's very tempting to say they only do so to save face. When software obtains dominance, it becomes the de facto standard, and people must learn it to use computer systems and take advantage of their power, unless they feel like doing tons of research themselves. That isn't most people's preferred use of free time.
Commercial software development also enjoys the use of money to motivate people to get things done. All food takes effort to produce, and almost always costs someone money or time; commercial software fills its developers' bellies with warm food. So commercial software gains hard labor: marketing, engineering, research, and support. This has been a real blessing to commercial video games.
Open Source enjoys enthusiasm. In this model, volunteers who are often talented engineers and programmers decide to free themselves from the commercial model...
The commercial model tends to set traps. Often, software is written to shackle and dominate, and computers become more expensive than necessary. Once dominance is achieved, a company can crush competition unfairly, and none of the software needs to improve. That is why open source exists; hence the Free Software Foundation. (Besides, why should someone with talent be forced to make someone else rich against his own interest?)
In Open Source, the development model has to ensure high technical merit, because the software usually doesn't enjoy marketing or professional-looking technical support; it can even look pathetic simply because it's free. One of its strengths is that its license (usually the GNU General Public License v2) allows free modification, scrutiny, usage, and distribution.
I used to think this didn't really provide much value for the common user: usually someone who just wants to get a job done and doesn't enjoy tinkering (as some call it) or finding a better way to do the same thing. However, I now rescind that position. Free software makes these freedoms available to any user, whether or not he or she wants them, so that he or she can become whatever he or she wants. The alternative is to go to college and seek a degree, then go to a business and climb the ladder. This matters because many technically minded people have difficulty relating the complex details of software to non-technically minded people. These are the kinds of things mentioned on The Daily WTF and Computer Stupidities. Those sites are humorous, not scientifically valid studies, but they do offer a view into what technically minded people face in the world of Information Technology. So the advantage of open source is its unshackling from the commercial model, which benefits only a few, as opposed to the Open Source model, which benefits many.
While the spirit and technical merits really do wonders for Open Source, it has only a few extrinsic strengths to boast of besides the openness of the code, primarily the availability of information. Although some open projects are poorly documented (I shake my fist at TinyERP), many are blessed with all sorts of free support options. I enjoy IRC myself because I can get very quick and very substantial responses; that speed affords the timeliness needed for a good evaluation by someone much more knowledgeable who can critique an effort. Then there are forums, manuals, wikis, and so on. Often the spirit of commercial software doesn't allow outsiders to create support communities: they cannot peer into the code, cannot debug, cannot correct. When someone does build a community, it's to sell access to it or to offer third-party support. So in commercial software, nothing gets done without the promise of money. There are many free communities for commercial software, but they just won't be as good, because the code is hands-off.
The free availability of information is a key component of my life. Because not just the software but also the documentation is free, and often fairly complete, I can study, learn, and grow into a better technician. I've been able to build routers, web servers, and LDAP servers, and to do all sorts of graphics work, financial work, music work, networking, and on and on. I'm free to do what my heart pleases, and to earn money in the process, even building businesses.
Some of my contemporaries enjoy this as well. Faith Computing (and the site I maintain, Faith Sites) was spawned from what Open Source made available. We use Joomla and SugarCRM (commercial open source is another subject entirely) and many other projects to earn a living. We can do all sorts of powerful things not just because the software stands strong on its own technical merits, but because of the free availability of information.
Software has come a long way since Bell Labs made the first UNIX. It was nearly stunted by the commercial process of software design; openness unshackled it with one strong merit: the availability of information. Not only can I obtain the software, I can get the most out of it. I'm able to customize it, configure it, and deploy it for all it's worth. I can even do this to earn money. I wasn't much of a technician before I discovered open source, and now that I am, I give more thanks to the community for what I've become than to all the books and institutional education I've enjoyed. That is to say, the availability of information about this software will ensure its strength in the world of information technology, despite its lack of marketing and against the tide of dominance.
Monday, September 1, 2008
Linux and Hurricane Gustav: The Interdictor Project, HAM Radios and VoIP
Occasionally I'm given a privileged view into a clever project. Nowadays, technology can be put together very quickly to produce something genuinely useful. The end result is that I can listen at my computer, with no HAM radio capability of my own, to a HAM radio broadcast of a piece of news I ran across.
Recently I was invited to participate in a project. Hurricane Gustav has been terrorizing the southern coast, and an information-dissemination project began. Several people combined HAM radio communications and Internet radio to make information on hurricane Gustav freely available to those who need it.
The Interdictor Project uses HAM radio to send VoIP data to various stations, and the signal is also converted to an Internet radio format so people online can listen. I'd imagine that received signals not sent over HAM radio are also converted to VoIP by the Interdictor station, but I'm fuzzy on the details.
The project recruits experienced HAM radio operators to do the communication. I was asked to trawl news sites for news on hurricane Gustav and mention items in an IRC channel whenever I came across something new. My idea was to add a Google News RSS feed to one's favorite RSS client, which allows for very comfortable and very quick notice of news events.
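For the curious, here is a minimal sketch of that idea in Python, assuming the feedparser library; the feed URL below is a placeholder, so substitute whatever Google News search feed you actually subscribe to.

# A minimal polling loop over a news RSS feed. The URL is a placeholder
# for whatever Google News search feed you subscribe to.
import time
import feedparser

FEED_URL = "http://news.google.com/news?q=hurricane+gustav&output=rss"  # assumed/placeholder
seen = set()

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.link not in seen:               # only surface items we haven't reported yet
            seen.add(entry.link)
            print(entry.title, "-", entry.link)  # or paste it into the IRC channel by hand
    time.sleep(300)                              # check again every five minutes

A dedicated RSS client does the same job more comfortably; the point is simply that new items show up within minutes of publication.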
As I understand it, when something new or an update is discovered, the HAM radio operators broadcast it, and one can hear a duplicate on Internet radio. I'm not into HAM radio, so as I listen to the HAM communications on the Internet radio channel I peer a little into the HAM radio world.
Mostly there are just reports of wind and rain measurements at various locations and times. Occasionally there is a call to clarify information, or a routine request to connect to certain monitor nodes if one is not in the hurricane Gustav area. The amazing thing is seeing technology used so cleverly for a noble task.
The picture is this: using RSS feeds, if I run across something important, I mention it in a certain IRC channel. The HAM radio operators there then report it on their HAM radio channel, which isn't analog but VoIP. That is converted to an Internet radio format, and I can listen with amaroK to the HAM radio communications of the Interdictor effort to disseminate information about hurricane Gustav, all within the comfort of my own home.
Saturday, August 30, 2008
Playstation2 Controller for an MS-DOS Game on Linux
I had one of those moments where you suddenly realize the circumstances were totally unexpected. One by one the pieces come together, as one innocently adds or changes something, until something nobody expected occurs.
Descent was a video game published by Interplay a long time ago, when MS-DOS ruled computer gaming. Back then, joysticks used those connectors that plugged into the game ports on sound cards. Then Microsoft developed DirectX, and Windows 95 came to reign. Windows 95 was basically a front-end for MS-DOS that provided a lot of the functionality one would expect from a full operating system (memory management, etc.). The point is that DOS games could for the most part still run while Windows 95 ran, because the Windows 95 environment was very compatible.
Then Linux 1.0, a UNIX clone, was released. Being a UNIX, it was a very different environment from MS-DOS and Windows 95 (or 98, ME, 2K, XP, or Vista for that matter). Written by hackers, Linux was rapidly developed and combined with the GNU system to form complete operating systems. However, Linux didn't get to enjoy the aggressive marketing and support that Microsoft got from all the hardware vendors. The hackers had to write all their own drivers.
Windows 95 was supported in that nearly all hardware manufacturers wrote drivers for Windows, and Windows users enjoyed the ease of installing drivers so Windows could use their hardware; but the Linux community grew and kept developing drivers as well as software.
Occasionally a new program would pop up that allowed compatibility: Wine, dosemu, dosbox, and so on. Then there was USB and other types of hardware support. Then some clever hacker wrote d1x-rebirth.
So, I install d1x-rebirth. I plug my PS2 controller into my Radio Shack USB adapter, and that into a USB slot. I launch the game and have some fun before I realize:
I'm playing an old DOS game on a UNIX with a controller made for a non-PC console, and the original game was never coded to use USB or graphics acceleration.
It was a blast. Way to go Open Source!
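As an aside, if you want to confirm that your own USB adapter and controller are visible before launching the game, a minimal sketch like this (Python with pygame, which is just one way to check and not necessarily what d1x-rebirth uses internally) will list whatever the joystick layer sees.

# List the game controllers the system can see (e.g. a PS2 pad on a USB adapter).
import pygame

pygame.init()
pygame.joystick.init()
count = pygame.joystick.get_count()
print("Detected joysticks:", count)
for i in range(count):
    js = pygame.joystick.Joystick(i)
    js.init()
    # A PS2 pad on a USB adapter usually reports the adapter's name here.
    print(i, js.get_name(), "axes:", js.get_numaxes(), "buttons:", js.get_numbuttons())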