So yeah, the Xbox 360 is pretty uninteresting, apart from the memory limitation. Fortunately, the third and final thing you can program with XNA is rather interesting: the Zune. The Zune is Microsoft's answer to the iPod: a portable music player. However, the Zune is also capable of running third-party programs through XNA. This makes it rather interesting, as it gives you the best taste of what it was like to program a console in the past (back before console CPUs ran in the GHz and memory was measured in hundreds of megs), since it shares some of the same programming considerations (or at least more so than the 360 or PC).
There are a few different models of Zune with different capabilities, which can be separated into what I'll call the 'Zune' (the first several models, which are pretty much identical for our purposes), and the recent Zune HD.
The Zune series is shown above. Members of the series vary in exact design, but all share a few common features: a low-power ARM CPU, 64 megs of RAM, and the ability to do basic bitmap graphics. Each contains a 240x320 LCD screen in 3:4 (portrait) aspect ratio (the screen size varies from 1.8 to 3.2 inches, diagonal), running at 30 FPS (some high-end models also support 60 FPS). For input, each has a circular pad as well as one button to each side of the pad.
These features give the Zune a unique set of programming considerations. Obviously, the CPU is drastically less powerful than on the PC or Xbox 360. Games have a quota of 16 megs of RAM usage, in which both code and data must fit (although as this is only 1/4 of the total RAM of the Zune, garbage collection overhead is much less of a problem, and you should be able to use all 16 megs without seeing too much garbage collection slowdown). Graphics are limited to 2D sprite-based operations, and screen space is very limited (the Zune actually has a quite impressive display resolution for its screen size, but it's still tiny). Of course, the fact that this is a portable system means that battery life is also an issue. But perhaps the most tricky issue is the input system.
The Zune is designed to be used standing up, as shown in the above picture. In this orientation, the screen is taller than it is wide. Control can use both thumbs: one thumb on one button, and the other shared between the circular pad and the second button. However, as games are typically designed around a screen that is wider than it is tall, this configuration comes off as somewhat unnatural, though some games suit it better than others (e.g. a top-down shooter would have no problems here).
Alternately, the Zune may be turned on its side, yielding the standard 4:3 aspect ratio used on everything prior to high-definition televisions and wide-screen monitors. The chief problem with this configuration, then, becomes control. Due to the button configuration of the Zune, this configuration allows only one thumb for input, shared between the circular pad and the two buttons. This has the effect of drastically reducing the potential complexity of gameplay input, as it's far more cumbersome to switch between buttons than to simply have a second thumb take care of one of them.
The Zune HD is shown above. You could really call this the Zune 2 (or, if you're Nintendo, the Super Zune), as it's almost entirely different from the previously mentioned Zune series. It's powered by an nVidia Tegra (the APX 2600), a system-on-a-chip that combines an ARM CPU with a GPU capable of 3D graphics (although thus far XNA does not support 3D on the Zune HD). As far as I know, the amount of memory on the HD has not been published, though it's safe to say it has at least 64 megs of RAM. While the screen is only 3.3 inches (about the same as the higher-end Zunes), it's now 480x272 (when laid sideways), in the HDTV 16:9 aspect ratio.
Perhaps most significant, however, and the reason the Zune is much more interesting than the PC and Xbox 360, is the change in input system. As can be seen in the image, the Zune HD has no buttons or other controls. Instead, the HD features two new input methods: a multi-touch display and an accelerometer.
For those not familiar with them, a multi-touch display is a type of touch screen. A basic touch screen takes as input a single point of contact on the screen (and, in some designs, the pressure exerted at that point); multi-touch takes this to the next level by allowing multiple simultaneous points of contact, including tracking the movement of each point. This allows for very flexible and powerful input, permitting such interfaces as "point and click" via pressing the screen at a point, dragging and dropping, and gestures. This even allows interfaces such as the one seen in Minority Report (and other sci-fi-ish depictions), where multiple points of contact with the touch screen can be used to grab and move, rotate, or resize items on screen (this type of interface exists in the Microsoft Surface and other multi-touch devices; also, be sure to check out the Dungeons and Dragons on Surface demonstration).
You might not be able to guess what an accelerometer is based only on the name: obviously it measures acceleration, but unless you're into physics, you probably wouldn't make the connection with gravity. Technically, holding an object in the air (as opposed to letting it free-fall) requires an upward force to be applied to the object, and a force produces acceleration. An accelerometer measures this acceleration against the force of gravity; in other words, an accelerometer can tell which direction is up, based on the orientation of the device. Of course, it detects other types of acceleration, such as movement in space, as well. Between the possibilities, this allows a number of interesting input systems (impossible with traditional input methods), such as basing the in-game camera on the position and/or orientation of the Zune, actions triggered by bumping or shaking, etc.
Microsoft makes the multi-touch screen and accelerometer available in XNA through the XNA Zune Extensions addon. Once you've downloaded that, search the help for Zune HD Input Overview for general information about support. After that, zunezune.org has posted a helpful simple "game" that demonstrates multi-touch and accelerometer: the code is available here, and a video of the program in action is below. Finally, Platformer: Adding Touch Support (also in Extensions help) is a tutorial that shows you how to add multi-touch and accelerometer support to the platformer starter kit.
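To make that concrete, here's a minimal sketch of polling both devices, assuming the TouchPanel and Accelerometer APIs from the Zune Extensions (treat the exact API surface as an assumption and check the Zune HD Input Overview help topic):

```csharp
// Hypothetical input poll for a Zune HD game, assuming the TouchPanel and
// Accelerometer APIs provided by the XNA 3.1 Zune Extensions.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Input.Touch;

public class InputDemo
{
    Vector2 playerPosition;

    public void PollInput()
    {
        // Multi-touch: each TouchLocation is one tracked finger; its Id
        // persists from Pressed through Moved to Released, so you can follow
        // individual fingers across frames (drags, pinches, gestures).
        TouchCollection touches = TouchPanel.GetState();
        foreach (TouchLocation touch in touches)
        {
            if (touch.State == TouchLocationState.Pressed ||
                touch.State == TouchLocationState.Moved)
            {
                playerPosition = touch.Position; // drag the player around
            }
        }

        // Accelerometer: acceleration as a Vector3. With the device at rest
        // this is dominated by gravity, so it tells you which way is up;
        // tilting the Zune then steers the player.
        AccelerometerState accel = Accelerometer.GetState();
        playerPosition.X += accel.Acceleration.X * 5f;
    }
}
```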
Wednesday, December 23, 2009
& Modern Things Part 2
So, after looking a bit at several (very) old game systems, how about we look at a couple of the new ones? Though in reality the Xbox 360 is actually pretty boring, which is to say that it's more or less a modern computer.
The 360's Xenon CPU is a pretty typical incarnation of the PowerPC line used in older Macs, and is related to the Playstation 3's Cell CPU (although the Cell has a very unusual architecture resembling a cluster on a chip, and differs quite a bit from other PowerPC chips - or most CPUs, for that matter). The PowerPC line, the low-end portion of the larger Power line, consists of RISC processors with simple instructions limited primarily to operations on registers, in contrast to the CISC x86 and the CPUs of the 2600, NES, and SNES, which tend to use many instructions that operate directly on memory.
The Xenon is composed of three symmetric 64-bit cores with a shared L2 cache. Each core executes two threads simultaneously, and contains (among the expected things) a SIMD vector unit for significant math performance (although if you're using XNA you won't have access to the vector unit). The only remotely noteworthy part of the CPU is that, unlike some other PowerPC varieties (and all Intel CPUs for quite a while now), execution is in-order, meaning a thread must stall whenever it hits a slow operation such as a cache-missing memory access; the assumption is that running two threads per core will reduce the impact of individual stalls.
The Xenos GPU is also fairly uninteresting. It's a custom ATI 3D GPU designed specifically for the Xbox 360 and optimized for console games, though a lot of it resembles common PC GPUs of the same vintage. It supports DirectX 9 Shader Model 3, although it contains some custom extensions that provide some of the features new in DirectX 10 Shader Model 4 (though the details may differ), such as the unified shader architecture. It also has dedicated hardware to provide 4x anti-aliasing for "free" (as opposed to the performance penalty that normally accompanies anti-aliasing), as well as optimized z-only rendering. Finally, after all the fancy rendering is done, the 360 supports several (television) output resolutions, from 640x480 (standard TV) to 1920x1080 (the highest HD widescreen).
But the most noteworthy parts are the ones we haven't seen before (at least in this series of posts).
Some versions of the 360 come with a hard drive of varying size. In addition to saved games (which, on models without the hard drive, can also be stored on memory cards or internal flash memory), this drive is used for game caching (hard drives are faster than DVDs) and optional downloadable content. It's also used to store the Xbox compatibility software that allows the 360 to emulate games for the original Xbox.
Finally, all Xbox 360s come with the ability to connect to the internet (wired ethernet ports are standard, with a wireless addon), primarily for the purpose of connecting to the Xbox Live service. Live is a social platform that covers a wide range of services (although some require a paid subscription to Live), including friends lists and communication; multiplayer matchmaking and play; game achievements that allow your friends to see your gaming accomplishments; voice and video chat with friends; downloadable bonus game content; the Live Marketplace, where you can purchase and download addons, entire games (including XNA games), and other content (e.g. movies); and several major third-party web services, such as Netflix streaming movies and Last.fm streaming music.
So, that's the hardware and the platform. But what's it like to program? Well, thanks to the surreal veil of secrecy surrounding consoles in general, that much isn't really common knowledge, and I'm not entirely sure. Development is in C or C++, presumably with Microsoft's own compiler targeting PowerPC (Intel's compiler is x86-only, so it's out). The 360 uses a custom operating system (so they say) that supports at least some approximation of the Windows API and DirectX; while the OS does not use the same driver system Windows normally uses, the CPU is probably the only piece of hardware in the system programmers are supposed to directly access, with other hardware abstracted through the Windows or DirectX APIs or some such. Given this, if Microsoft is smart, they made it as similar to programming on Windows as possible, so that developers can transition from the PC to the 360 with minimal education. Though one thing that will definitely have to differ is the compiler intrinsics, for things such as vector math (perhaps the same intrinsics that were used on the PowerPC Mac) and multithreading-related things (e.g. memory barriers; remember those?).
The situation is different if you're using XNA. In this case programming the 360 is almost identical to programming the PC via XNA. Programming is done in C#, and games run on the .NET Compact Framework. The runtime libraries consist of a subset of the .NET class library plus the additional features supplied by the XNA class library. No hardware is directly accessible; the CPU and memory are hidden behind the .NET framework, and graphics and sound hardware can only be accessed through the XNA class library (as far as I know you can't directly access DirectX through XNA, at least on the 360).
But regardless of how you program it, perhaps the most noteworthy difference between programming a PC and the 360 is the memory limitation. While on PCs it's always been cheapest to just make your users buy more memory, on consoles it's frequently the case that you have to actually spend development manpower shaving off memory usage to make your game fit in the console's memory (at least for large, complex games). The 360 has 512 megs of memory. While this may not sound so bad at first, you have to realize that this is common memory, shared by both the CPU and the graphics system (though at least the OS probably takes up drastically less memory on the 360 than on the PC). Compared to PC games that typically take north of a gig of main memory and 512 megs video memory, 512 megs starts to look pretty small (for the curious, the Playstation 3 is comparable: 256 megs main memory and 256 megs graphics memory).
This is especially true in the case of XNA. As stated previously, thanks to garbage collection, you can only use 30-40% of the total system memory before collection starts eating substantially into your available processing power; on the 360 this comes out to something like 64-128 megs, depending on how much memory is used for graphics. Fortunately, there's a way to deal with this penalty: avoid garbage collection entirely. Garbage collection is triggered when a memory allocation fails because there isn't enough unallocated memory to satisfy it; the framework then performs garbage collection to look for memory that was allocated but is no longer being used, and can be freed to make room for the new allocation.
In other words, if you can avoid allocating memory during gameplay, you can prevent garbage collection (you can also manually trigger garbage collection at convenient times, such as during loading or pausing); this is optimization 101: the fastest code is the code that isn't executed. Use structs, which are allocated on the stack or inline within their containing structure, rather than classes, which are allocated from the heap; use allocation-minimizing algorithms and data structures, such as an open-addressing hash table (e.g. Dictionary), where the table is a single array of entries, rather than an array of linked lists or a binary tree (e.g. SortedDictionary), which must allocate memory for each entry; reserve reasonable initial sizes for structures so they aren't likely to need reallocation during gameplay; use free lists as much as possible; use specific enumerators rather than IEnumerator; etc. - anything that can significantly reduce the need to allocate memory.
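As a concrete illustration, here's a minimal sketch of the preallocate-everything pattern (the particle system is hypothetical; the pattern is the point): a fixed-size pool of structs that never allocates during gameplay, plus a manual collection at load time so a "natural" one doesn't hit mid-game.

```csharp
using System;
using Microsoft.Xna.Framework;

// Hypothetical particle system illustrating the preallocate-everything
// pattern: all memory is allocated up front, so gameplay generates no
// garbage and never triggers a collection.
public class ParticlePool
{
    // A struct, not a class: the whole pool is one array allocation.
    struct Particle
    {
        public Vector2 Position;
        public Vector2 Velocity;
        public float Life;
        public bool Active;
    }

    readonly Particle[] particles;

    public ParticlePool(int capacity)
    {
        // The only allocation, done at load time.
        particles = new Particle[capacity];

        // While we're loading anyway, collect now so leftover garbage from
        // loading doesn't trigger a collection during gameplay.
        GC.Collect();
    }

    public void Spawn(Vector2 position, Vector2 velocity)
    {
        // Recycle a dead slot; if the pool is full, drop the particle
        // rather than allocate.
        for (int i = 0; i < particles.Length; i++)
        {
            if (!particles[i].Active)
            {
                particles[i].Position = position;
                particles[i].Velocity = velocity;
                particles[i].Life = 1f;
                particles[i].Active = true;
                return;
            }
        }
    }

    public void Update(float dt)
    {
        // Plain indexed loop so the structs can be mutated in place.
        for (int i = 0; i < particles.Length; i++)
        {
            if (!particles[i].Active) continue;
            particles[i].Position += particles[i].Velocity * dt;
            particles[i].Life -= dt;
            if (particles[i].Life <= 0f) particles[i].Active = false;
        }
    }
}
```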
Friday, December 18, 2009
& Modern Things Part 1
One thing that has always kind of mystified me is the degree of inaccessibility maintained around video game console development. For almost every video game console in history, getting access to the manufacturer's developer documentation and hardware requires that you be licensed (and it sucks to be you if the manufacturer doesn't like you) and have a large sum of money for the development kits; for most systems, the accessibility of the system to lone hobbyists has been zero.
Of course, if there's enough interest, the system is eventually reverse-engineered, and unofficial documentation is put up on the internet (perhaps followed by lawsuits). But such documentation is usually incomplete, inaccurate, or just plain badly written, and there are almost always anti-piracy lockout mechanisms that prevent you from actually running any code you manage to write on the system itself.
This is a strikingly odd decision, and seemingly self-destructive. Many software companies turn a blind eye to piracy among college students because they realize that students who pirate their programs in college, when they couldn't afford to buy them anyway, are more likely to use them professionally, when they do have money to purchase them. Some companies even give away their programs to students for educational purposes, for the very same reason: more student users who get the program for free means more buyers after graduation. Economically, it's a simple business investment, trading theoretical income now, which they couldn't collect anyway, for greater actual income later.
The situation on consoles is no different. If you promote hobbyist development on your game console, that increases the probability that those hobbyists will develop for your console when they go pro. This is especially applicable considering that the modern game console business model is to sell the console at a loss, then make up that loss in licensing of games; thus, every additional game developed for your system equals more money for you. Ultimately, I suspect the reason for this odd behavior lies in the lockout mechanism: to let hobbyists run their own code on the console, you'd have to weaken or remove the lockout mechanism, which would make games much easier to pirate.
Anyway, there are only a couple of exceptions to this rule that I know of. Sony published a mini developer kit for the Playstation, called the Net Yaroze, for about a grand. It included a black development version of the system (lacking the lockout hardware), various documentation (I don't know whether this was the same as or inferior to the professional documentation), some run-time libraries (probably inferior to the professional kits), and software to get custom code onto the Yaroze. Of course, anything you wrote could only be run by people with their own Yaroze, due to the lockout system.
Very surprisingly, only in the last couple of years has anyone actually targeted non-professionals as serious developers for their system. Of course I'm talking about XNA (official site). XNA is a free game programming framework built on top of the .NET platform. Games are written in C# using Visual C#, and make use of the .NET and XNA class libraries. The .NET libraries (a subset, to be precise) provide general support code such as data structures and multi-threading, while the XNA libraries cover hardware access, such as graphics and sound, plus various support routines too game-specific for the .NET libraries, such as quaternions and interpolated curves. Use of the .NET framework also provides things such as garbage collection that make programming easier and faster; the use of non-native code and APIs also means that, if code is written carefully, a single game can be compiled and run on all three platforms XNA supports: Windows, Xbox 360, and the Zune (the fine print: development on the 360 requires a Creators Club Premium subscription; membership is available to anyone, but costs $100/year).
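To give a feel for the framework, here's roughly the skeleton every XNA game starts from (essentially what the project template generates, trimmed down): Update runs your simulation, Draw renders a frame, and the framework drives the platform-specific loop underneath.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

public class MyGame : Game
{
    GraphicsDeviceManager graphics;
    SpriteBatch spriteBatch;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";
    }

    protected override void LoadContent()
    {
        // Load textures, sounds, models, etc. through the content pipeline here.
        spriteBatch = new SpriteBatch(GraphicsDevice);
    }

    protected override void Update(GameTime gameTime)
    {
        // Game logic: input, physics, AI. Called once per tick.
        base.Update(gameTime);
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.CornflowerBlue);
        // spriteBatch.Begin(); ...draw sprites... spriteBatch.End();
        base.Draw(gameTime);
    }
}
```

The same class, give or take some conditional compilation, builds for Windows, the 360, and the Zune.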
Unlike the completely unrelated, professional Xbox 360 Software Developer Kit, XNA is targeted specifically at hobbyists and independent developers (e.g. a person who writes a small game and wants to sell it for cheap). Microsoft runs an active community site containing Microsoft-written code samples, tutorials, and starter kits (e.g. a 2D RPG and a 3D racing game), and forums where both hobbyists and XNA developers post. There are also third-party XNA tutorial/code sites, though Ziggyware, the largest and best, imploded a ways back (humorously, according to Google, Ziggyware and a few others had posted the presentation I wrote on kd-trees as part of my graphics term project, prior to the site going under).
Once a game is finished, in addition to distributing the source and/or binaries manually, it can be submitted to several places, depending on platform. Free Windows games may be submitted to the XNA community website for download. Games can also be submitted to the Xbox Live Marketplace, where they can be downloaded, either for free or for sale (whatever you decide on), by anyone who has an Xbox 360 (a Creators Club subscription is not required to play games published on the Marketplace). To say that again: you can sell your amateur Xbox games for cash through Live Marketplace, right next to professional downloadable games.
Of course, this isn't without its caveats. XNA doesn't allow you to directly access the hardware, or use the same low-level APIs professional developers use from a natively compiled language like C++. This means that XNA games cannot compete in sheer speed and power with professionally-developed games. It also means that you may not be able to access 100% of the features of the hardware, where XNA only exposes a portion of the feature set; one very notable omission is the ability to use the Xbox 360's vector math unit, a fact that drastically reduces the performance potential for some types of calculations.
Second is the fact that it's a .NET platform. While this choice was good from an ease-of-programming perspective (garbage collection is easier to program with and less prone to various coding errors, especially given that most of the people coding on XNA will be amateurs and thus may not be very good), it's not so good from a performance perspective. Perhaps the most stereotypical problem for garbage-collected systems like Java or the .NET platform is that garbage collection is far from free, in terms of both CPU and memory. If you plot the proportion of time the CPU spends collecting garbage (instead of, you know, actually running the program) against memory usage, you'll find that after a certain threshold the cost grows exponentially. One study found that, when memory utilization is at 20%, garbage collection is no more expensive (and maybe even faster) than explicit allocation/deletion, but costs rise quickly from there: garbage collection is 17% slower at 33% memory usage, 70% slower at 50% memory usage, and as you approach 100% memory utilization garbage collection approaches 100% of total CPU time.
A third major point of concern is that the XNA framework on the Xbox 360 does not have the full .NET framework/compiler, but only the Compact Framework. This version is designed for hardware that doesn't have a lot of memory or processing power (especially things like PDAs), and offers reduced memory and CPU overhead at the price of sub-optimal execution speed. Of the various optimizations performed by the full-fledged .NET framework, only a subset are performed by the compact framework; for example, inlining is restricted, virtual function calls are implemented in a more compact but slower manner, and both the framework and garbage collection algorithm are just dumber in general, to name a few issues (see here for a more lengthy list).
However, making decisions is easy when there's only one option, and if you're a hobbyist wanting to code for a modern video game console, that option is XNA. Even if you're not interested in consoles, XNA still provides a convenient and free game development platform, designed specifically with hobbyists in mind. It also opens the possibility of making a bit of money off your amateur games prior to going pro.
Tuesday, December 15, 2009
& Moral Panics
One topic that came up rather suddenly in IRC is the irrationality of humans when a person feels wronged. The particular claim in chat was that, I'm told, you should never, ever touch leaked materials, such as the Windows source or COFEE, because this tends to send companies (especially the one that produced said thing) into a moral panic in which they refuse to ever hire you.
Think about this for a moment; a little rational thought shows that this is highly irrational behavior, reminiscent of the Pointy-Haired Boss (is there a Dilbert strip on this topic, I wonder?). If you were Microsoft, for instance, and you were looking to hire a programmer for the Windows team (although this could apply to other parts as well), the #1 most desirable candidate is the one who has played extensively with the leaked Windows source, all other things being equal. Not only would categorically refusing to hire such a person produce no benefit, it would materially harm you as a company, by turning away the candidate most beneficial to you. This is a case where moral outrage contradicts reason, and acting on that outrage results in self-destructive behavior that does more harm than good; or, as the saying goes, cutting off your nose to spite your face.
An alternate form of this is observed extensively in the copyright industries, which have a long history of licensing and technology blunders detrimental to their own sales, all in the name of fighting piracy (and the goal of fighting piracy is, you know, to increase sales). In this case the moral outrage is provoked by a fixation on the amount of piracy, which is a fundamentally flawed measurement. The entire purpose of business is to maximize profit, and that is concerned (usually) solely with sales: reducing piracy (if you can even manage that) is of no benefit unless doing so produces a net increase in sales at the same time; whatever the exact number of pirated copies may be is entirely irrelevant. And if you haven't managed to boost sales in the process, you're all the worse off, because you're already out the money you spent fighting piracy.
(for those wondering, yes, the term "moral panics" is from Patry)
Sunday, December 13, 2009
& Even Older Things
On Friday I went to a free, open presentation at the University of California, Irvine. Normally I wouldn't even mention such a thing on the blog, but it just so happened that a significant part of the presentation was about the hardware of the Atari 2600, a game console launched in 1977, 6 years before the Nintendo (NES). As this fits right in with a series of posts on this blog, I figured I might as well write about it.
Interestingly, the 2600 used almost the same CPU (the 6507) as the NES (6502) and Commodore 64 computer (6510). According to Wikipedia, the 6507 was a smaller version of the NES's 6502, while the 6510 was an expanded version of the 6502. The 6502 supported 64 KB of address space (16-bit addressing), while the 6507 supported 8 KB (13-bit addressing); the 6502 also supported (external) hardware interrupts (the vertical blank interrupt, in the case of the NES), while the 6507 did not.
In contrast to the NES's 2 KB and SNES's 128 KB, the 2600 had an amazing 128 bytes of RAM (although more could be added on cartridges, at extra manufacturing cost). Early 2600 games came in 2 KB ROMs, although the system supported up to 32 KB with bank switching (4 KB at a time); in comparison, NES games ranged from (I think) 32 KB to 768 KB (also with bank switching: 32 KB at a time), although in theory it could support more.
But by far the most "interesting" thing about the system was the graphics system. Unlike the NES, SNES, and pretty much all consoles and computers made in the last three decades, the 2600 had no video RAM to speak of. Instead of storing on-screen image data in video memory, which the graphics chip then composes and outputs to the display, on the 2600 the video was drawn actively, one line at a time; by "actively" I mean that the game had to compose each line as it was drawn by the television. Each line, 160 pixels wide, was composed of 20 two-color background ("playfield") blocks describing the left half of the screen (the right half repeats or mirrors them; more on this below), 2 eight-pixel-wide single-color bitmap sprites, 2 single-color line "missiles" whose colors mirror those of the sprites, and a line "ball" that was much like the missiles, but took the playfield's color.
However, while the hardware was very (very) limited, the ability (or necessity, in this case) to change the screen contents while drawing was underway provided some flexibility for the clever programmer (just like with the NES and SNES). By changing the sprite configuration between lines you could have more than two sprites on screen, though more than two sprites on the same scan line required alternating between them each frame, resulting in flicker. Alternating palettes allowed the system to display 4 different colors on each line (out of a total of 128), as well as multicolored sprites and backgrounds. Clever use of the missiles and ball allowed for additional sprites per line, or multi-color and non-block backgrounds. Developers even found that they could produce a full 40 independent blocks of background (the background is 40 blocks wide, but the registers hold only 20 bits, describing the left half of the screen; the right side is formed by repeating or mirroring the left side) by modifying the background registers halfway through the line.
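To make the repeat/mirror behavior concrete, here's a small sketch in C# of how 20 bits of playfield expand into a 40-block line (simplified: the real TIA splits the 20 bits across three registers, PF0/PF1/PF2, with mixed bit orderings that I'm glossing over):

```csharp
// Expand the 2600's 20 playfield bits into a 40-block scanline.
public static class Playfield
{
    public static bool[] ExpandLine(uint playfieldBits20, bool mirror)
    {
        var line = new bool[40];

        // Left half: 20 blocks, one per register bit.
        for (int i = 0; i < 20; i++)
            line[i] = ((playfieldBits20 >> i) & 1) != 0;

        // Right half: the hardware either repeats the left half verbatim
        // or mirrors it; a "full" 40-block background requires rewriting
        // the registers mid-line, between the two halves.
        for (int i = 0; i < 20; i++)
            line[20 + i] = mirror ? line[19 - i] : line[i];

        return line;
    }
}
```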
Finally, the 2600 had sound analogous but inferior to the NES's. It had two channels of sound, one generating a square wave of varying pitch, the other white noise. In comparison, the NES had five channels: two square wave (used to approximate most melodic instruments in music), one triangle wave (often for bass or low-frequency strings), a noise channel (used for drums and other percussion), and a delta-modulation sample channel (I'm not familiar with any instances of this being used by games).
Pac Man, showing off more than two sprites and 40-block-wide backgrounds:
Pitfall, showing off multi-colored sprites and backgrounds, as well as (possibly) non-block backgrounds:
Friday, December 04, 2009
Random Fact of the Day
Exact Audio Copy, like some similar programs (I know older versions of Nero are like this), can be made to work on a Windows limited user account, but it requires a couple of things.
First, you have to make the DAT files in the EAC folder writable by all users (or at least those you want to be able to use it).
Second, you have to enable low-level access to the disc drive for limited users. The easiest and least dangerous (in terms of not downloading software from who knows where) way to do this is to simply use Nero BurnRights (search down on the page), which allows you to set this option; note that you don't need to actually have Nero to use this program. Of course you'll have to install and run it as admin, but once you set the option you'll be able to use EAC (and other similar programs) from any user.
Finally, make sure EAC is set to use the Native Win32 Interface (EAC Options->Interface); this should be the default, but who knows.
Thursday, December 03, 2009
& Almost as Old Things
Recently I've had the whim to play through Final Fantasy 6 (3 in the US) again, and have been doing so on an (SNES) emulator. I'm currently about to start the last level, but that's beside the point of this post. While playing, I thought I'd take a look at the SNES hardware and write a blog post about it, given that I'd already looked at the NES hardware - though not anything half so extensive as what I did with the NES, just a look at the basic hardware. What I found actually surprised me. As it turns out, the SNES really is the Super NES; that is, it has very similar hardware, only better.
First of all, the CPU (a 65C816 derivative) is a generational upgrade to the NES' CPU, bigger and better. The CPU is now 16-bit (as opposed to the NES' 8-bit CPU), but has essentially the same instruction set (with some augmentations). The CPU is still CISC, sporting the same three general registers (an accumulator and two index registers) and operating on the accumulator using the contents of memory. However, the CPU now has much greater flexibility in memory access, with a 24-bit pointer register and the ability to access memory relative to the stack pointer*. It also now has multiply and divide instructions, as well as a few other things. But deep down, it's the very same instruction set architecture, and in fact it has an 8-bit compatibility mode that lets it directly execute code written for the original NES CPU.
*Both of these features were conspicuously absent in the NES, as I believe I noted. As the NES only had 8-bit registers, it had no way to hold pointers (which were 16 bits) in registers; to make use of pointers the NES had an indirect addressing mode where the CPU would write a pointer to memory 8 bits at a time, and then had an instruction to load/store a value through that pointer (think "mov reg, [memory]"). While the NES did have a stack, it had only push and pop instructions, and lacked the ability to access data relative to the stack pointer, preventing use of the stack to pass parameters or store local variables; consequently, parameters and local variables were assigned fixed memory addresses, and the stack was rarely used.
The situation is somewhat similar in the graphics system. While the graphics chip is drastically more powerful than the NES', it's based on the same concept of background layers and sprites, all drawn from a (larger) bank of 8x8 tiles. The SNES supports twice as many sprites as the NES (and a lot more per line) and sprites can be much larger (up to 64x64, compared to 8x16 on the NES), with 16 colors each (compared to 4, including transparent, on the NES); but perhaps the most interesting improvement is that there are now 4 background layers, and they can be combined via various raster operations in many interesting ways.
To illustrate this, take a look at this picture: a typical shot from a battle. The SNES supports 8 different graphics modes. For the most part the difference between the 8 is the number of layers and how many colors each layer supports (presumably this is due to it being too expensive to put enough video memory in to allow full color from all layers at once). In this scene I'm guessing that it's using mode 3 ("3 layers, two using 16-color palettes and one using 4-color palettes"), based on the number of layers I can see plus the number of colors.
To show off the graphic abilities of the SNES, next is a screen shot of the special effects from casting of a spell, which we're going to dissect.
This is layer 1. You can see the bubble from the spell here, sporting transparency that tints other layers. Interestingly, you can see part of a screen (the magic selection screen) that isn't open at all; in the composite, this section is covered up by the bottom part of layer 2. In other words, in the top part of the screen layer 1 is drawn on top of all other layers, while the bottom part is drawn at the bottom of the layers; this goes to illustrate what I said about very complicated and flexible interaction between the layers (it's possible this involves changing the layer parameters in between lines, a technique I mentioned being possible on the NES).
Layer 2 is the background for the whole screen, the top section the battlefield, the bottom section the menu system. Note that a sine wave offset pattern has been applied to the battlefield background; while I haven't investigated it to be certain, I suspect this is accomplished by simply modifying the screen scroll position in between drawing each line.
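If that guess is right, the effect boils down to a table of per-scanline horizontal scroll offsets, something like this sketch (on the SNES the table would be fed to the scroll register each scanline, e.g. via HDMA; here we just compute the offsets):

```csharp
using System;

public static class WaveScroll
{
    // Build per-scanline horizontal scroll offsets for a "wavy" background.
    public static int[] BuildWaveOffsets(int screenHeight, float amplitude,
                                         float wavelength, float phase)
    {
        var offsets = new int[screenHeight];
        for (int line = 0; line < screenHeight; line++)
        {
            // One sine sample per scanline; advancing the phase each frame
            // animates the ripple.
            offsets[line] = (int)(amplitude *
                Math.Sin(2 * Math.PI * line / wavelength + phase));
        }
        return offsets;
    }
}
```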
Layer 3, again serving an array of purposes, is used for special effects and the text on the menus. You can't see it clearly at all from just this layer, but this layer is used to produce those discolored blotches on the spell bubble. This may be another case of transparency for the special effect, but it's hard to tell from screen shots alone.
To illustrate this fact, layers 1 and 3 together:
And finally, the least interesting part: the sprite layer. Although here again we see something rather unexpected: it looks like there's some type of sprite garbage that is normally (again) covered by the menu.
So, that's basically what 7 of the 8 graphics modes are all about. The 8th one, known as Mode 7, however, is a bit different. It has only a single large 256-color background layer, but it has the ability to apply a transformation matrix to the background, allowing scaling and rotation. This is used very commonly in SNES games, especially with the swap-the-registers-between-lines trick, allowing it to do primitive 3D perspective projection. Believe it or not, that minimap is actually a sprite (as is some of the glow in the background).
One particularly interesting (in the sense of peculiar) feature of the SNES design is the sound system. As opposed to most sound systems, the SNES system does not consist of the CPU simply writing channel parameters such as sample #, frequency, etc. to the sound chip which then performs the requested operation. Rather, the SNES has a separate CPU (the SPU) which acts as a sound coprocessor: sound programs are written, assembled, and then executed on this coprocessor; once the program has been uploaded, the sound system can play (e.g. music) without any further involvement of the main CPU at all (I proved this a decade ago by showing that shorting between two pins on the SNES would crash the main CPU while the SPU continued to play the music without missing a beat), though obviously the main CPU must instruct the SPU when it's time to play dynamic sound effects.
We can only guess why the SNES was designed this way. The most obvious possibility is that this frees the main CPU from having to deal with music and sounds effects, leaving it more cycles to spend on something else (especially in the case where one or more channels of the music must be temporarily dropped to allow a sound effect to be played).
An alternate possibility that I haven't been able to confirm is that this is done to increase the resolution of the audio system. In Blaster Master, the game would perform all the calculations for the frame, then spin waiting for the vertical blank interrupt to begin work on the next frame. Thus code executed 60 times a second, apparently including audio code. If this is true in the general case, that limits the resolution of audio operations to 1/60 second as well. In contrast, the SPU runs at 1 MHz independent of the main CPU, allowing it to issue commands to the sound generator at any time, in theory allowing for higher music tempo and more complex audial effects.
First of all, the CPU is a generational upgrade to the NES' CPU: bigger and better. The CPU is now 16-bit (as opposed to the NES' 8-bit CPU), but has essentially the same instruction set (with some augmentations). The CPU is still CISC, sporting the same three general registers (an accumulator and two index registers), with instructions operating on the accumulator using the contents of memory. However, the CPU now has much greater flexibility in memory access, with a 24-bit pointer register and the ability to access memory relative to the stack pointer*. It also now has multiply and divide instructions, as well as a few other additions. But deep down it's the very same instruction set architecture, and in fact it has an 8-bit compatibility mode that lets it directly execute code written for the original NES CPU.
*Both of these features were conspicuously absent on the NES, as I believe I noted. As the NES only had 8-bit registers, it had no way to hold pointers (which were 16 bits) in registers; to make use of pointers, the NES had an indirect addressing mode in which the CPU would write a pointer to memory 8 bits at a time, and then an instruction to load/store a value through that pointer (think "mov reg, [memory]"). While the NES did have a stack, it had only push and pop instructions, and lacked the ability to access data relative to the stack pointer, preventing use of the stack to pass parameters or store local variables; consequently, parameters and local variables were assigned fixed memory addresses, and the stack was rarely used.
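To make that concrete, here's a minimal C sketch (my illustration, not anything from the hardware documentation) of what that indirect load amounts to: the program first stores a 16-bit pointer into memory one byte at a time, and the load then reassembles it and fetches through it.

```c
#include <stdint.h>

static uint8_t mem[0x10000]; /* the CPU's 64 KB address space */

/* NES-style indirect load: the program has already written a 16-bit
   pointer, 8 bits at a time, to addresses zp and zp+1. The CPU
   reassembles the pointer (little-endian) and loads through it --
   the only way the NES could dereference a pointer at all. */
uint8_t load_indirect(uint8_t zp)
{
    uint16_t ptr = mem[zp] | ((uint16_t)mem[zp + 1] << 8);
    return mem[ptr];
}
```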
The situation is somewhat similar in the graphics system. While the graphics chip is drastically more powerful than the NES', it's based on the same concept of background layers and sprites, all drawn from a (larger) bank of 8x8 tiles. The SNES supports twice as many sprites as the NES (and a lot more per line) and sprites can be much larger (up to 64x64, compared to 8x16 on the NES), with 16 colors each (compared to 4, including transparent, on the NES); but perhaps the most interesting improvement is that there are now 4 background layers, and they can be combined via various raster operations in many interesting ways.
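For readers who haven't seen tile-based graphics before, here's a rough C sketch (my own illustration, with simplified data layouts) of what drawing a layer "from a bank of 8x8 tiles" means: the hardware never stores a full bitmap of the background, just a grid of tile numbers plus the tile pixel data, and each screen pixel is resolved through that indirection.

```c
#include <stdint.h>

#define TILE_W 8
#define TILE_H 8
#define MAP_W  32  /* tilemap width in tiles (illustrative) */

/* Sample one background-layer pixel. tilemap holds tile numbers; tiles
   holds 8x8 pixel data as one palette index per pixel (real hardware
   packs these into bitplanes, omitted here for clarity). */
uint8_t bg_pixel(const uint16_t *tilemap, const uint8_t *tiles,
                 int x, int y)
{
    uint16_t tile = tilemap[(y / TILE_H) * MAP_W + (x / TILE_W)];
    int tx = x % TILE_W, ty = y % TILE_H;
    return tiles[tile * TILE_W * TILE_H + ty * TILE_W + tx];
}
```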
To illustrate this, take a look at this picture: a typical shot from a battle. The SNES supports 8 different graphics modes; for the most part, the difference between them is the number of layers and how many colors each layer supports (presumably because it would have been too expensive to include enough video memory to allow full color on all layers at once). In this scene I'm guessing that it's using mode 3 ("3 layers, two using 16-color palettes and one using 4-color palettes"), based on the number of layers I can see plus the number of colors.
To show off the graphical abilities of the SNES, next is a screenshot of the special effects from the casting of a spell, which we're going to dissect.
This is layer 1. You can see the bubble from the spell here, sporting transparency that tints the other layers. Interestingly, you can also see part of a screen (the magic selection screen) that isn't open at all; in the composite, this section is covered up by the bottom part of layer 2. In other words, in the top part of the screen layer 1 is drawn on top of all other layers, while in the bottom part it is drawn beneath them. This goes to illustrate what I said about very complicated and flexible interaction between the layers (it's possible this involves changing the layer parameters in between lines, a technique I mentioned being possible on the NES).
Layer 2 is the background for the whole screen: the top section is the battlefield, the bottom section the menu system. Note that a sine wave offset pattern has been applied to the battlefield background; while I haven't investigated it to be certain, I suspect this is accomplished by simply modifying the screen scroll position in between drawing each line.
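If my guess is right, the effect amounts to something like the following C sketch (purely illustrative; the register-latching helper is made up, and on real hardware the per-line writes would happen during horizontal blank): the layer's horizontal scroll is rewritten before each line with a sine offset, and the phase advances each frame so the wave animates.

```c
#include <math.h>
#include <stdint.h>

/* Hypothetical hook: latch a horizontal scroll value to take effect
   on the given scanline (stubbed out; on real hardware this would be
   a register write timed to horizontal blank). */
static void set_bg_hscroll_for_line(int line, uint16_t scroll)
{
    (void)line; (void)scroll;
}

/* The guessed "wavy background" effect: base scroll plus a small
   per-line sine offset, shifted every frame. */
static void apply_wave(uint16_t base_scroll, int frame)
{
    for (int line = 0; line < 224; line++) {   /* visible scanlines */
        double phase = (line + frame) * (2.0 * 3.14159265 / 32.0);
        int16_t offset = (int16_t)(3.0 * sin(phase));
        set_bg_hscroll_for_line(line, (uint16_t)(base_scroll + offset));
    }
}
```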
Layer 3, again serving an array of purposes, is used for special effects and the text on the menus. You can't see it clearly at all from just this layer, but this layer is used to produce those discolored blotches on the spell bubble. This may be another case of transparency for the special effect, but it's hard to tell from screen shots alone.
To illustrate this fact, layers 1 and 3 together:
And finally, the least interesting part: the sprite layer. Though here again we see something rather unexpected: it looks like there's some kind of sprite garbage that is normally (again) covered up by the menu.
So, that's basically what 7 of the 8 graphics modes are all about. The 8th one, known as Mode 7, however, is a bit different. It has only a single large 256-color background layer, but it has the ability to apply a transformation matrix to the background, allowing scaling and rotation. This is used very commonly in SNES games, especially with the swap-the-registers-between-lines trick, allowing it to do primitive 3D perspective projection. Believe it or not, that minimap is actually a sprite (as is some of the glow in the background).
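Conceptually, Mode 7 works something like this C sketch (my illustration; the real hardware uses fixed-point registers rather than doubles): for each screen pixel, a 2x2 matrix plus a translation maps screen coordinates into the background's coordinate space, and the background is sampled there. Recomputing the matrix between scanlines, so that lower rows are scaled as if closer to the camera, is what produces the pseudo-3D perspective.

```c
#include <stdint.h>

#define BG_SIZE 1024  /* Mode 7's single large background layer */

typedef struct {
    double a, b, c, d;  /* 2x2 transformation matrix */
    double x0, y0;      /* translation / center of rotation */
} Affine;

/* Sample the Mode 7 background for screen pixel (sx, sy). */
uint8_t mode7_pixel(const uint8_t bg[BG_SIZE][BG_SIZE],
                    const Affine *m, int sx, int sy)
{
    int bx = (int)(m->a * sx + m->b * sy + m->x0);
    int by = (int)(m->c * sx + m->d * sy + m->y0);
    bx &= (BG_SIZE - 1);  /* wrap: the background can tile */
    by &= (BG_SIZE - 1);
    return bg[by][bx];
}
```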
One particularly interesting (in the sense of peculiar) feature of the SNES design is the sound system. In most sound systems, the CPU simply writes channel parameters such as sample number, frequency, etc. to the sound chip, which then performs the requested operation. The SNES instead has a separate CPU (the SPU) which acts as a sound coprocessor: sound programs are written, assembled, and then executed on this coprocessor, and once the program has been uploaded, the sound system can play (e.g. music) without any further involvement of the main CPU at all. (I proved this a decade ago by showing that shorting between two pins on the SNES would crash the main CPU while the SPU continued to play the music without missing a beat.) The main CPU must, of course, still tell the SPU when it's time to play dynamic sound effects.
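In rough outline, the division of labor looks like the following C sketch (all names hypothetical; the real machine communicates over a handful of shared I/O ports with its own handshake protocol): after a one-time upload of the sound program, the main CPU only ever drops short commands into a mailbox that the sound CPU polls between mixing ticks.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical mailbox shared between the main CPU and the sound CPU. */
volatile uint8_t spu_mailbox;  /* command posted by the main CPU */
volatile uint8_t spu_ack;      /* acknowledgement from the sound CPU */

/* Main-CPU side: upload the sound program once at startup
   (byte-by-byte handshake elided)... */
void spu_upload(const uint8_t *program, size_t len)
{
    (void)program; (void)len;
}

/* ...then just post commands; music keeps playing regardless. */
void spu_command(uint8_t cmd)
{
    spu_mailbox = cmd;          /* e.g. "play sound effect #cmd" */
    while (spu_ack != cmd) { }  /* wait for acknowledgement */
}

/* Sound-CPU side, running entirely on the coprocessor. */
void spu_main_loop(void)
{
    for (;;) {
        /* mix_music_tick();  -- sequencer/mixer work, elided */
        uint8_t cmd = spu_mailbox;
        if (cmd != spu_ack) {
            /* trigger_sound_effect(cmd);  -- elided */
            spu_ack = cmd;
        }
    }
}
```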
We can only guess why the SNES was designed this way. The most obvious possibility is that this frees the main CPU from having to deal with music and sound effects, leaving it more cycles to spend on something else (especially in the case where one or more channels of the music must be temporarily dropped to allow a sound effect to be played).
An alternate possibility, which I haven't been able to confirm, is that this was done to increase the resolution of the audio system. In Blaster Master, the game would perform all the calculations for a frame, then spin waiting for the vertical blank interrupt to begin work on the next frame. Thus the code executed 60 times a second, apparently including the audio code. If this is true in the general case, it limits the resolution of audio operations to 1/60 of a second as well. In contrast, the SPU runs at 1 MHz independent of the main CPU, allowing it to issue commands to the sound generator at any time, in theory allowing for higher music tempo and more complex audio effects.
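The frame-locked structure I'm describing looks roughly like this (C sketch, names invented): because the audio update lives inside the once-per-frame loop, no sound parameter can change more often than 60 times a second.

```c
#include <stdbool.h>

/* Hypothetical flag set by the vertical blank interrupt handler. */
volatile bool vblank_flag;

static void wait_for_vblank(void)
{
    vblank_flag = false;
    while (!vblank_flag) { }  /* spin until the next vertical blank */
}

/* A frame-locked main loop of the kind described above: game logic
   and audio both tick at exactly 60 Hz, which is the point here. */
void game_loop(void)
{
    for (;;) {
        /* update_game_logic();  -- elided */
        /* update_audio();       -- capped at 60 updates per second */
        wait_for_vblank();
    }
}
```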