Search This Blog
Wednesday, November 30, 2005
There is indeed a mind-blowing story about collateral damage that needs to be told, but that story is one in which we honor the extraordinary achievement of the United States military: two years of combat since the fall of Baghdad, much of it urban warfare, with less than 1,000 civilians killed as a result of U.S. action.
What is the source for these numbers? The most comprehensive study of civilian casualties is available from a group opposed to the Coalition intervention in Iraq called Iraq Body Count. This summer, the Iraq Body Count project published an analysis of casualties in the Iraq War that must be admired for its meticulous documentation.
This study reports 24,865 civilian deaths in the first two years of the Iraq War, an apparent ringing endorsement of the "Iraq in chaos" position. But a curious statistical anomaly jumps right off page one: over 81% of the civilian casualties are men. Even stranger, over 90% of civilian casualties are adults in a country with a disproportionate percentage of the population under 18 (44.5%). This contradicts a basic tenet of the civilian casualty argument, namely that we are describing collateral damage during a time of war. Collateral damage does not differentiate between male and female, between child and adult. A defective smart bomb falling in a marketplace, stray bullets ripping through bedroom walls, city warfare in Fallujah – all these activities should produce casualties that reflect the ratio of men to women or adults to children that prevail in Iraq as a whole.
This question is particularly relevant when one side in the conflict does not wear uniforms, is predominantly adult and of one gender, and engages in a practice of concealing its combatants within the civilian population. The statistics are further distorted if the Iraqi security forces – essentially the free Iraqi military on the side of the U.S. coalition – are classified as civilians, as they are in this study.
At this point, I was feeling rather discouraged, as the file format wasn't anything recognizable. I found a really nice site that explains the various flavors of ADPCM; but alas, none of them described the format I was seeing.
That left an exceedingly painful alternative: reverse-engineer the game and find the decompression code. While I do know MIPS assembly language (which the PS2 uses), debugging an unfamiliar platform is hell.
Next idea: talk to nameless programmer friend who programs, among other things, the PS2. I explained the situation, and asked if he thought it was remotely feasible to reverse-engineer the thing. He doubted it. However, he offered an opinion even more valuable: he suggested it might be VAG, a hardware format. Now that was a format I'd never heard of before, other than seeing it among the formats MFAudio supports. I smell an opportunity...
I whipped out a random WAV file and ran it through MFAudio (which can encode as well as decode). While the header was obviously different (for reasons I wouldn't realize till later), the distinctive data block structure was evident in the generated VAG file.
That left one thing to do, to confirm: try to splice the AUS file data into the VAG file, and see if MFAudio could play it. I deleted the VAG file data and replaced it with the data from the AUS file.
Encouraged by a solid 3,300 user responses to its Desktop Linux survey, the Open Source Development Labs (OSDL) Desktop Linux Working Group (DTL) Tuesday thanked all its respondents by email and began sifting through the mountain of data the survey provided.
The month-long online survey focused on determining the key issues driving Linux on the desktop as well as the major barriers to Linux desktop adoption, OSDL officials said.
Tuesday, November 29, 2005
The first paragraph is pretty much how I feel about the matter. Nobody asks to get raped, but certain things (wearing skanky clothes, getting drunk on a date, etc.) are playing with fire. The rapist is always the Bad Guy™; there's no question about that. But only a complete idiot would hand a Bad Guy another chance to do something bad!

UPDATE: Perhaps I should clarify something confusing, a little. It sounds like me saying that being raped is not the fault of the victim is contradictory to me saying that the victim was playing with fire. Here's what I mean: I believe that it takes a special disposition (either by nature or by nurture) to rape. I don't believe that people lacking this disposition will end up raping girls just because they were wearing skanky clothes. I do believe, however, that someone disposed to rape will be more likely to rape a girl like that. It's not the girl's fault that the guy was disposed to rape, but it wasn't very bright to intentionally do something that increased the probability of being raped, either.
Imagine you're driving through a traffic light. The light is totally green, and you're following every letter of the law. Then some psycho goes zinging through the intersection, unmistakably running the red light, and is headed straight for you. You have two options: continue, reassured that the guy is completely in the wrong and the accident will be his fault, or slam on the brakes and avoid the accident completely (let's assume you still have time to do so). You'd have to be a friggin' idiot to do the former, yet people try to justify that in things like this topic. That you didn't stop doesn't excuse the law breaker - the guy that ran the red light - but the fact is that you could have prevented it and you didn't. And with something as painful as rape (or car accidents, for that matter), do you think you'll CARE that it wasn't your fault, after it happens?
To me, date rape is something of another beast. I consider rape to be, by definition, one person forcing sex on another, when they know that the other is not willing. The real distinction of date rape is that, while common rape is pretty clear about what happened (it's extraordinarily rare for a woman to consent to sex with someone she knows nothing about, and isn't even on a date with), date rape is significantly more muddy, as it's very difficult to prove that it meets that definition. That men tend to misunderstand female signals as sexual invitation (and not understand when 'no' means 'no', especially when hormone-crazed) is thoroughly documented in social psychology, and things get even more difficult if the girl had previously consented to sex with the guy (as it makes it that much more difficult to tell whether she meant no, or was just playing*).
This is one of the definitions for arguing lack of responsibility for a crime in court: when the perpetrator did not, at the time of committing the crime, have the ability to tell right from wrong. If the guy doesn't know the girl doesn't want to have sex (in his mind, they're having consensual sex), how can you say that he could tell what he was doing was wrong (as most people would not consider consensual sex in and of itself a crime)?
This, of course, leads to even more sticky issues. Even if the guy didn't know that the girl wasn't willing, the (substantial) damage was still done, to the girl. What do you do with a real victim without a real criminal?
* And even worse is the (halfway commonly held) belief that girls that say (and perhaps mean) no will still enjoy the sex once things get going (a common porn scenario). Although this doesn't fall under the same category as mistaking 'no' as play, as in this case the guy may know, at the time, that the girl does not want sex (and thus meets the definition of rape).
2. Post this quote: "And on another note, to the subset of moral relativists who are communists, socialists, and other leftists, who believe that no one person can have a claim on any property, then how can a woman object if a rapist decides to make use of that which belongs to him?"
3. Sit back and enjoy the show
Monday, November 28, 2005
Now, what do you suppose is responsible for that Gord-awful distortion? The Ogg encoder? Nope. My decompression code? I wish. The AUS encoder? Also negative. No, ladies and gentlemen, that is the sound of some idiot at Capcom recording this track at such high volume that a good 15-20% of the samples get clipped to fit in a signed 16-bit integer.
How exactly did this get past quality control? Even a deaf person could have told you this would sound funny, just from looking at the wave form.
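For anyone curious how you'd spot this without even listening: here's a quick sketch (my own throwaway heuristic, obviously not Capcom's QA process) that counts how many signed 16-bit samples are pinned at the rails, which is a crude but effective clipping indicator:

```python
import struct

def clipping_ratio(pcm_bytes):
    """Fraction of little-endian signed 16-bit samples pinned at the
    rails (-32768 or 32767) -- a crude but effective sign that the
    recording level was hot enough to clip."""
    count = len(pcm_bytes) // 2
    samples = struct.unpack("<%dh" % count, pcm_bytes[:count * 2])
    clipped = sum(1 for s in samples if s in (-32768, 32767))
    return clipped / count
```

Anything much above a fraction of a percent is audible; 15-20% is the kind of thing you can see in the waveform from across the room.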
Sunday, November 27, 2005
Saturday, November 26, 2005
Well, fortunately I was too stubborn (or perhaps bored) to quit. So, I continued the next day (only had like 2 hours to work on it on Monday). Scouring the archives for information, I found several general types of files: 'ASF ', 'AUS ', and a large file type that had no header tag. So, where's the music?
Perhaps rather aimlessly, I began searching through the binaries, looking for some piece of information that might lead me to the music. As luck would have it, that's exactly what I found. The strings "SELECT JUNGLE" and "FrontEnd/Music/Jungle.aus" in close proximity. The former I had seen before - it was in the secrets menu in the game, and played a remix of one of the music tracks. This offered pretty convincing evidence that the AUS files were the ones I was looking for.
Naturally, my next step was to extract a couple of them and look at the format. While it was nothing I recognized, and had no apparent waveforms (and searching for AUS format on various sites yielded no information), the file format was striking: rows and rows of 16-byte data blocks. The fact that the blocks were 16 bytes large was obvious, due to the near invariance displayed by the first two bytes of each block. This immediately made me think of ADPCM, as some variants of it used 16-byte blocks of data. However, the format didn't resemble any ADPCM variant I'd seen before; nor did any of the couple dozen audio formats I tried saving with Cool Edit Pro have the striking block structure.
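As a side note, that "near invariance" observation can be automated. Here's a rough sketch (my own heuristic, not any standard tool) that counts the distinct byte values seen at each position within the 16-byte blocks; positions with very few distinct values are likely per-block header bytes, as in many ADPCM variants:

```python
def block_header_profile(data, block_size=16):
    """For each byte position within a block, count how many distinct
    values appear across all blocks. Near-constant positions (few
    distinct values) suggest per-block header bytes, as in many ADPCM
    variants; high-entropy positions suggest compressed sample data."""
    positions = [set() for _ in range(block_size)]
    for off in range(0, len(data) - block_size + 1, block_size):
        for i in range(block_size):
            positions[i].add(data[off + i])
    return [len(s) for s in positions]
```

Run it on a suspected audio file and a low count in the first one or two positions, with the rest near 256, is exactly the pattern I was staring at in the hex editor.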
I again went searching the web, looking for a decoder. This time I searched for any type of PS2 audio file player, hoping that perhaps it was a common compression format in a different package (the AUS file). I happened upon the Mozzle Flash (MFAudio) player, which claimed to play several different game audio formats. I was disappointed to see that the only formats it would attempt to play without the proper file header were generic ADPCM and PCM. But I supposed that I should at least give the ADPCM a shot at the data.
Much to my surprise, music came out! Not only that, but loud music; fortunately, I had turned my sound way down, on the chance that it would play garbage and damage my speakers. Despite the obnoxious volume, it was playing music from the game, and I recognized it. Unfortunately, it wasn't playing it perfectly; crackles and distortions were clearly audible. Now what?
Friday, November 25, 2005
After listening to it for a bit, I realized it was remixed versions of the original music in something resembling MIDI quality. As I used to collect video game music, I wanted it in my (MP3) collection. However, I'm a lot lazier than I used to be (back when I used to record music directly from the consoles), and I wanted an easier way. Particularly because of the fact that I'd heard the music, after several minutes, fade out and restart from the beginning; this was a strong indication that the game was using digital (and thus fairly easy to rip) music. So began the hunt.
Sticking the DVD into my friend's computer (who was preoccupied playing Dragon Quest 8 for about a week before and a couple days after this) turned up the basic PS2 configuration files (SYSTEM, SLUS, etc.), a number of IRX modules, and 13 AIF files. No XA files (for those who aren't familiar with them, the XA extension refers to CDXA format files, a multi-stream digital ADPCM compressed format popular with Playstation games), which I originally expected to find. Okay, now what?
The absence of anything else, and the fact that the AIF files consumed 3.5 gigs of the DVD, made it probable that they were archives. A quick look at the files in a hex editor seemed to agree. As you can see in the picture, there's a table of 16-byte structures with 5 apparent fields in little endian order (three 32-bit fields followed by two 16-bit fields). As well, the first 4 bytes of the file listed the offset of the end of the table; this was probably a file table.
The fact that the second 32-bit field of each file table entry was generally equal to the second field of the previous entry plus the third field of the previous entry agreed with this; it seemed as though the second field was the file offset, and the third the file size. This was confirmed by following some of the file offsets and finding what appeared to be file headers; in addition, this also made it apparent that the files in the archives were neither encrypted nor compressed. The lack of any type of pattern in the first 32-bit field, and the complete absence of any file names in the archive, made me think this was a file name hash.
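For illustration, here's roughly what reading that table looks like in code. This is a sketch based purely on the layout inferred above: I'm assuming the entries start right after the first 16-byte row, and the field meanings are guesses from staring at a hex editor, not a published spec:

```python
import struct

def parse_file_table(archive):
    """Parse the inferred AIF file table. The first u32 is the offset of
    the end of the table; each 16-byte entry appears to hold
    (name_hash: u32, offset: u32, size: u32, u16, u16), little endian.
    ASSUMPTIONS: entries start at byte 16, and the field meanings are
    my inferences, not documented format."""
    (table_end,) = struct.unpack_from("<I", archive, 0)
    entries = []
    for pos in range(16, table_end, 16):
        name_hash, offset, size, a, b = struct.unpack_from("<IIIHH", archive, pos)
        entries.append({"hash": name_hash, "offset": offset, "size": size})
    return entries
```

The sanity check is the one described above: each entry's offset should equal the previous entry's offset plus its size, give or take alignment padding.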
My first thought was to scan the archives looking for XA files. So I did a text search for 'CDXA', the format tag of the CDXA format. No dice. I then tried searching for 'RIFF', the tag of RIFF format files (of which CDXA is one). This turned up two matches: one apparently in an executable, and the other in a RIFF/WAVE file (a standard issue .WAV file). I followed the offset back to the file table, cut the WAV file out, and played it. From the sound, it seemed to be background music for the title screen, nothing more.
As well, in the process of various text searches, I'd found strings referring to various files with names such as BGM.XA. This made me wonder if the files were stored outside the DVD file system. I don't really know why you would do that, but I've seen this done in other games, before. So, I whipped out Visual Studio 2003 and MSDN library, and started coding a text searcher. This one would open the DVD drive as a volume (look at CreateFile for information about this), then search for the text. In the process I amused myself by writing my first ever complete program using asynchronous I/O, which used dual read buffers to read a block while searching the other. But in the end, it was futile. No occurrences of 'CDXA' or 'RIFF' were found outside those found in the archive files.
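The search itself doesn't need anything Windows-specific to illustrate; here's a simplified, synchronous sketch of the same idea (minus the dual-buffer asynchronous I/O), with the one subtlety that matters regardless of platform: successive reads must overlap by len(needle) - 1 bytes, or a match spanning a chunk boundary gets missed:

```python
def search_raw(path, needle, chunk_size=1 << 20):
    """Search a file (or a raw volume, e.g. one opened via CreateFile on
    Windows; here just a path) for a byte pattern, reading in chunks and
    carrying len(needle)-1 bytes of overlap between reads so matches
    spanning a chunk boundary aren't missed. Returns absolute offsets."""
    hits = []
    overlap = len(needle) - 1
    tail = b""
    pos = 0  # total bytes read so far
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            buf = tail + chunk          # buf[0] is at absolute offset pos - len(tail)
            start = 0
            while True:
                i = buf.find(needle, start)
                if i < 0:
                    break
                hits.append(pos - len(tail) + i)
                start = i + 1
            tail = buf[-overlap:] if overlap else b""
            pos += len(chunk)
    return hits
```

The dual-buffer asynchronous version I actually wrote is the same logic, just with the next read in flight while the current buffer is scanned.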
Okay, now what?
Thursday, November 24, 2005
The simplest format of instructions is the long immediate format. In this format, the high 6 bits contain the opcode for the instruction, and the remaining 26 bits contain the long (signed) immediate. This format is used primarily in conditional branch instructions, in which the immediate represents the relative address of the branch target, and the opcode indicates the condition being tested.
Next is the short immediate format. In this format, the high 6 bits contain the opcode, the next 5 bits contain the destination register index, the next 5 bits the source register index, and the final 16 bits contain the short (signed) immediate. This format is used for all instructions that take a register and an immediate as parameters, such as load and store instructions (which add the immediate to the value of the source register to form the address for the operation) and math operations that take an immediate value.
Last is the register format. Just like the short immediate format, the top 16 bits contain the opcode, destination register, and first source register, respectively. After that, 5 bits contain the second source register, the next 5 bits the second destination register, and the last 6 bits contain the extended opcode. In this instruction format, the primary opcode is always 0, indicating that this is a register format instruction, with the extended opcode specifying the operation to be performed; this arrangement was chosen to allow a greater number of instructions.
The second source register is used in any instruction that takes two inputs, with neither being an immediate. The second destination register is used only in instructions that have two outputs; right now the only instructions which do are the multiply (64-bit result) and divide (32-bit quotient and 32-bit remainder) instructions.
If this instruction format looks familiar (to, say, MIPS), that's probably because I've been studying MIPS all semester in my Low Level Languages class, which handily coincides with the time that I've been designing the Q1. Nevertheless, a lot of it is just common sense. The maximum that can be stored in one instruction is a 16-bit immediate and two 5-bit register indices, leaving 6 bits for the opcode. As well, in each case the order of instruction fields is such that the maximum amount of similarity between formats is achieved, minimizing the decoding hardware necessary.
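To make the formats concrete, here's a sketch of encoding and decoding them. Only the bit widths and field order come from the description above; the opcode numbers in any example are made up, since the Q1 opcode map isn't final:

```python
def encode_long_imm(opcode, imm):
    """Long immediate format: 6-bit opcode, 26-bit signed immediate."""
    return (opcode << 26) | (imm & 0x3FFFFFF)

def encode_short_imm(opcode, rd, rs, imm):
    """Short immediate format: 6-bit opcode, 5-bit destination register,
    5-bit source register, 16-bit signed immediate."""
    return (opcode << 26) | (rd << 21) | (rs << 16) | (imm & 0xFFFF)

def decode_short_imm(word):
    """Split a 32-bit word back into (opcode, rd, rs, imm)."""
    imm = word & 0xFFFF
    if imm & 0x8000:          # sign-extend the 16-bit immediate
        imm -= 0x10000
    return (word >> 26) & 0x3F, (word >> 21) & 0x1F, (word >> 16) & 0x1F, imm
```

Note how the register format decoder would reuse the same top-16-bit extraction; that shared field layout is exactly the decoding-hardware savings mentioned above.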
Wednesday, November 23, 2005
Q1 will use a mix-and-match of features from the x86 and PPC. Conditions and overflow are both handled by means of a condition register, with flags for carry (unsigned overflow), overflow (signed overflow), signed (negative) result, and zero result. This condition register will be set only by versions of math and binary instructions that set the condition register (add!, sub!, and!, or!, xor!, nor!).
I decided on this method because I consider it too slow and cumbersome to have to manually determine whether overflow or carry has occurred, or whether a comparison of two numbers is true. As well, exceptions are too slow to execute; not only that, but to support both carry and overflow exceptions, there would have to be separate signed and unsigned instructions for every math operation.
I also considered making add and subtract operations 4-register operations (two inputs, and two outputs forming a doubleword result), which would have made it very easy to do chain math operations of values larger than the word; while this is a neat idea, it seemed impractical, as not only would it have required signed and unsigned variants of those operations (so that the Q1 would be able to determine whether the high word should be 1 or -1 if a carry occurs), but it would have made comparisons against zero more difficult.
Q1 supports two methods of handling conditions, once the condition register has been set. First, it supports conditional jumps for carry/unsigned less than, overflow/signed less than, unsigned greater than, signed greater than, signed result, and zero result. It also supports conditional moves that are 3-register operations - the destination register will be set to one value (in another register) if the condition is true, or a second value if it is false. I may also add an instruction to invert the condition register flags; I'm still thinking about that.
To me, conditional moves were a necessity, for speed reasons. Any conditional branch has the potential to be slow, with that potential directly proportional to the frequency of the less taken branch; conditional moves do not have that possibility. However, if you think about it, it's logically possible to implement conditional branches without any conditional branch instructions at all: perform a conditional move with the two target addresses, then do an unconditional branch. While that would have cut down the number of instructions in the Q1 by half a dozen, I thought it would be too slow. A conditional branch takes only a single instruction, while using a conditional move in that way requires four: two loads to load the target addresses, the conditional move, and the unconditional branch.
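That four-instruction sequence, sketched as data flow (in Python standing in for Q1 assembly, since the ISA isn't final):

```python
def branch_via_cmov(condition, taken_target, fallthrough_target):
    """Model the four-instruction sequence from the text: two loads to
    get both branch targets into registers, one conditional move to pick
    between them, and one unconditional branch to the chosen address.
    No conditionally-executed branch instruction is ever needed."""
    r1 = taken_target                  # load target #1
    r2 = fallthrough_target            # load target #2
    dest = r1 if condition else r2     # conditional move (no branch here)
    return dest                        # unconditional (computed) branch to dest
```

The catch, as noted above, is that the computed branch target defeats branch prediction entirely, which is why dedicated conditional branches stay in the ISA.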
Tuesday, November 22, 2005
As has been mentioned previously, I spend a substantial amount of time playing World of Warcraft (WoW), Blizzard's MMORPG. More than anything else, I like playing with my friends. However, as is especially the case with friends who have a very limited amount of time they can play (or only use a single character), sometimes they play without me (I play on many chars, so it's rarely a problem the other way around). For catching up, I've developed a strategy, one that has left two of my friends with blank (uncomprehending) stares, thus far; so, I'll explain the math behind it, here.
First, let me give a brief summary of the relevant features of WoW, for those who haven't played it:
- Enemies near your level give experience (XP) when you kill them
- When in a party, XP for kills (only kills) is divided by the number of players in the party
- Quests come in many shapes and sizes; kill X number of Y, and collect X number of Y, where Y drops at some frequency from enemy Z are two examples
- Quests give XP when you complete them
- Each quest can generally only be completed once per character
- Thus, quest rewards are not a good way of playing catch-up, as the person you're playing with will have to do them in the future, and you won't have gained anything
- Grinding (killing enemies without any purpose other than to get XP) is boring
So, here's my strategy: when playing catch-up solo, do quests that require collection of items that drop off enemies. If you think about it, you can imagine the reasoning for my friends' skepticism: if you have to kill Y (the number of people in the party) times as many enemies, each giving 1/Y XP when in a party, shouldn't that mean that you get the same amount of XP doing the quest solo as when you do it in a group?
No. And here's why. While it's true that you will get the same amount of XP from the enemies when you do the quest solo, remember that the people you're playing with still need to do the quest. If you tag along with them, not only will you have gotten the XP from doing it solo, but you will also get a proportionate share of the XP from the party run: the other Y - 1 members need (Y - 1) times the solo kill count, and each kill's XP is split Y ways, so every party member earns (Y - 1)/Y * 100% of the solo-quest XP from that run. Your total is thus 100% + (Y - 1)/Y * 100%, while each other party member gets only (Y - 1)/Y * 100%. This comes out, for example, to a 150%/50% (% of the amount of XP for doing the quest solo) or 3:1 split between you and your companion, if in a group of two (166%/66%/66%, 5:2:2, for three, etc.).
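If the prose doesn't convince you, the arithmetic is easy to check. This just restates the reasoning above, under the simplifying assumption that drop rates and per-kill XP are uniform:

```python
def catch_up_split(party_size):
    """XP shares, as a percentage of the solo-quest XP, when one player
    does a collection quest solo first and then tags along while the
    other party members do it. The other Y-1 members need (Y-1)x the
    solo kill count, and each kill's XP is split Y ways, so everyone
    earns (Y-1)/Y of the solo XP on the group run."""
    y = party_size
    group_share = (y - 1) / y * 100     # each member's XP from the group run
    return 100 + group_share, group_share   # (you, each companion)
```

For a duo that's 150% vs. 50% (the 3:1 split); for a trio, 166% vs. 66% each (5:2:2).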
And on a completely unrelated note, there's a term in psychology called the hindsight bias. It describes the tendency of people who know the solution to a problem (especially in the case of nontrivial problems) to think that the answer was unavoidably obvious, even when the problem is difficult enough that it is likely that they themselves could not have solved it. Also known as the "I could have told you that" syndrome. A prime example of this is the media and others' response to the "intelligence failures" that prevented the 9/11 attacks on the World Trade Center from being stopped, despite the previously obtained evidence that the attack was coming.
Monday, November 21, 2005
MIPS treats signed overflow (but not unsigned carry, which it provides no mechanism for detecting) as an exception. When an arithmetic instruction generates signed overflow, an overflow exception is generated, and the exception handler is called. Separate unsigned arithmetic instructions exist, which will not throw overflow exceptions.
Conditions, on the other hand, are implemented by a series of conditional branch instructions: beq (branch if two values are equal), bne (branch if two values are not equal), bltz (branch if value is less than zero), blez (branch if value is less than or equal to zero), bgtz (branch if greater than zero), bgez (branch if greater than or equal to zero).
While overflow exceptions can be convenient, this method has many shortcomings. First, comparison of two values is cumbersome and slow, as it must be performed using a number of instructions. Testing for carry is similarly slow, also requiring multiple instructions. Finally, exceptions are slow. Even in a single-tasking system (like the Playstation, which uses a MIPS CPU), where the OS doesn't need to do complicated exception handling before control returns to the program (I benchmarked that handling at more than 100,000 cycles on my NT computer), if the exception handler is invoked in kernel mode (as is the case for MIPS, x86, etc.), a full kernel mode transition is still required before the user mode handler (i.e. the catch block) can be invoked (I don't know about MIPS, but on x86 this kind of thing can take hundreds of cycles). Compare this to the worst case scenario on a Pentium 4 (the worst performing CPU I know of with respect to unpredicted branches), where an incorrectly predicted branch can stall the CPU for 29 cycles.
x86 uses perhaps the most obvious method of handling conditions and overflows: a condition register. This register has flags for a wide variety of conditions, including carry, overflow, zero, signed, all four being set (or reset, as the case may be) by math and binary (and, or, etc.) instructions. In addition (and likely on account of the fact that the x86 only has 8 registers), x86 has two comparison instructions: CMP, which is equivalent to a subtraction, save that the result is not written to any register (thus conserving a register, while setting the flags from the operation), and TEST, which performs a binary and, then discards the result.
x86 offers three ways of responding to conditions. First, conditional branches allow for branching based on various conditions, such as greater than, less than, carry, signed, etc. As well, conditional set instructions set a register depending on whether the condition is true (1) or false (0); this is commonly used for complex boolean algebra expressions. Finally, conditional move instructions perform a move only if the condition is true. The conditional set and conditional move instructions are of particular value, as they allow actions other than branches (which can be mispredicted) to be taken based on conditions.
PowerPC uses a similar but simpler method of handling conditions and overflow. It also uses a condition register, comparison instructions (similar to the x86 CMP command), and conditional branches, but does not support conditional moves or sets. What is noteworthy, however, is that each math and logical instruction comes in two flavors: those that set the condition register, and those that don't. This allows other math operations to come between the condition register being set and the action taken as a result.
Sunday, November 20, 2005
The reason I put overflow and conditions under the same heading is that conditions are also based on carry and overflow. If we compare two values, one of the following must be true: they are equal, the first is less than the second, or the first is greater than the second. Computers perform this comparison using subtraction, then checking for overflow. Compare unsigned 5 and 10 (in that order): 5 - 10 = -5 (a result not representable as an unsigned value), with carry. If we reverse these, 10 - 5 = 5, with no carry.
Thus, a carry indicates that the first is less than the second (this is always true, not just in these two examples). In the case of both values being equal, the result will be 0. Note that the assumption that carry indicates less than, and no carry indicates greater than, is only valid when the result is nonzero.
Signed comparisons are a bit more complicated, as it's not as simple as whether or not there is overflow. I won't go into examples of why this is (as you'd have to construct a full truth table to see the relationships), but this is how it works: excluding the case of both being equal, the first value is less than the second if the overflow state differs from the sign of the result of the subtraction (overflow != result_sign).
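Both rules can be checked mechanically. Here's a sketch that models comparison-by-subtraction at an arbitrary bit width, using the same conventions as the text (a borrow out of the subtraction, i.e. no carry out of the two's-complement addition, means unsigned less-than):

```python
def unsigned_less(a, b, bits=32):
    """a < b (unsigned) iff the subtraction a - b borrows, i.e. the
    two's-complement addition a + ~b + 1 produces NO carry out."""
    mask = (1 << bits) - 1
    carry_out = (a + ((~b) & mask) + 1) >> bits
    return carry_out == 0

def signed_less(a, b, bits=32):
    """a < b (signed, with a and b given as bit patterns) iff the
    overflow flag of a - b differs from the sign bit of the result."""
    mask = (1 << bits) - 1
    sign_bit = bits - 1
    result = (a - b) & mask
    r_sign = (result >> sign_bit) & 1
    a_sign = (a >> sign_bit) & 1
    b_sign = (b >> sign_bit) & 1
    # subtraction overflows iff the operands have different signs and
    # the result's sign differs from the first operand's sign
    overflow = 1 if (a_sign != b_sign and r_sign != a_sign) else 0
    return overflow != r_sign
```

Running both over every pair of 4-bit values is a compact stand-in for the full truth table mentioned above.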
Saturday, November 19, 2005
Take the case of the unsigned addition of 0xFFFFFFFF and 0xFFFFFFFF (the largest possible numbers). The correct result of this addition is 0x1FFFFFFFE. However, this result requires 33 bits, and is thus truncated to 0xFFFFFFFE when placed in a register; one bit of significant data is lost.
Now, at the risk of confusing you, I should make it clear that the lost 33rd bit is not always significant. Take, for example, the subtraction of 5 from 10. In two's complement math this is performed by negating the 5 (to get 0xFFFFFFFB) and then adding it to 10. The result of this is 0x100000005, which is 33 bits. In this case, one bit is lost, but it contains no actual information. A single example such as this isn't sufficient to prove it is so, so I'll tell you straight out: for unsigned subtraction, overflow occurs if and only if there is NO loss of the 33rd bit - exactly the opposite of the case for addition.
However, it gets even more complicated. Consider the signed addition of 0x40000000 and 0x50000000. Both of these numbers are positive, so the result must also be positive. However, the result of addition is 0x90000000; the fact that the highest bit has been set indicates that the number is negative. Overflow has occurred, even though the 33rd bit hasn't been touched. Now consider the addition of -1 and -1 (0xFFFFFFFF). In this case the result is 0x1FFFFFFFE, or 0xFFFFFFFE (-2) when truncated. Here, the 33rd bit is lost, but no overflow has occurred.
What this means is that there are different methods of detecting overflow for signed and unsigned arithmetic. Unsigned arithmetic is fairly simple: if ((33rd_bit != 0) != is_subtraction), overflow has occurred. For signed arithmetic, it's more complicated. First of all, let me tell you that this equation, although it appears in some computer architecture books (like mine), is NOT correct: if (33rd_bit != 32nd_bit) overflow. The correct equation is: if ((32nd_bit_of_operand_1 == 32nd_bit_of_operand_2) && (32nd_bit_of_result != 32nd_bit_of_operand_1)) overflow. In other words, if both operands have the same sign (remember that with subtraction one operand will be negated; this must be taken into account), but the sign of the result is different, then overflow has occurred. Traditionally, unsigned overflow is referred to as a carry, and signed overflow is referred to as overflow (neither of which is a particularly good name).
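Those two detection rules, in code. This is a sketch for addition only; for subtraction, negate the second operand first and remember that the carry rule inverts, as described above:

```python
MASK32 = 0xFFFFFFFF

def add_flags(a, b):
    """32-bit addition of two bit patterns; returns (result, carry,
    overflow). Carry = the 33rd bit was lost (unsigned overflow).
    Overflow = both operands had the same sign bit but the result's
    sign bit differs (signed overflow)."""
    total = a + b
    result = total & MASK32
    carry = (total >> 32) != 0
    sa, sb, sr = a >> 31, b >> 31, result >> 31
    overflow = (sa == sb) and (sr != sa)
    return result, carry, overflow
```

It reproduces both examples above: 0xFFFFFFFF + 0xFFFFFFFF (i.e. -1 + -1) loses the 33rd bit but doesn't overflow, while 0x40000000 + 0x50000000 overflows without touching the 33rd bit.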
Friday, November 18, 2005
Thursday, November 17, 2005
Incidentally, the Q1 emulation core is now done. It currently weighs in at 1.8 KB, but I may be able to shrink it some. So far all the optimization has been stuff I've done as I've coded. Now that I'm all done, I can go back and look for new things to optimize.
The creator of the copy-protection software, a British company called First 4 Internet, said the cloaking mechanism was not a risk, and that its team worked closely with big antivirus companies such as Symantec to ensure that was the case. The cloaking function was aimed at making it difficult, though not impossible, to hack the content protection in ways that have been simple in similar products, the company said.

So the antivirus companies were working with the maker of the rootkit to begin with? Hope you brought some cool clothes, because it's hot where we're going.
So, I was looking for articles on gender roles for a psychology class paper. Well, I found something really interesting, because it's totally NOT what I was expecting to find. First of all, about the person being interviewed:
"Dr. Anne Fausto-Sterling, 56, a professor of biology and women's studies at Brown... lesbian... Her 1985 book, 'Myths of Gender: Biological Theories About Women and Men,' is used in women's studies courses throughout the country."
Q. Among gay people, there is a tendency to embrace a genetic explanation of homosexuality. Why is that?
A. It's a popular idea with gay men. Less so with gay women. That may be because the genesis of homosexuality appears to be different for men than women. I think gay men also face a particularly difficult psychological situation because they are seen as embracing something hated in our culture - the feminine - and so they'd better come up with a good reason for what they're doing.
Gay women, on the other hand, are seen as, rightly or wrongly, embracing something our culture values highly - masculinity. Now that whole analysis that gay men are feminine and gay women are masculine is itself open to big question, but it provides a cop-out and an area of relief. You know, "It's not my fault, you have to love me anyway."
It provides the disapproving relatives with an excuse: "It's not my fault, I didn't raise 'em wrong." It provides a legal argument that is, at the moment, actually having some sway in court. For me, it's a very shaky place. It's bad science and bad politics. It seems to me that the way we consider homosexuality in our culture is an ethical and moral question.
The biology here is poorly understood. The best controlled studies performed to measure genetic contributions to homosexuality say that 50 percent of what goes into making a person homosexual is genetic. That means 50 percent is not. And while everyone is very excited about genes, we are clueless about the equally important nongenetic contributions.
Q. Why do you suppose lesbians have been less accepting than gay men about genetics as the explanation for homosexuality?
A. I think most lesbians have more of a sense of the cultural component in making us who we are. If you look at many lesbians' life histories, you will often find extensive heterosexual experiences. They often feel they've made a choice. I also think lesbians face something that males don't: at the end of the day, they still have to be a woman in a world run by men. All of that makes them very conscious of complexity.
At a conference for its management software customers, company executives detailed the company's plans to add support for 64-bit microprocessors in its server applications and operating systems.

Frankly, I was disappointed when MS announced that Longhorn would run on x86-32 at all. Now that x86-64 CPUs are starting to appear on the desktop, and should be the majority by the time Longhorn ships, having Longhorn run only on x86-64 would have drastically simplified application design. But I guess something is better than nothing.
By late next year, Microsoft expects to deliver Exchange 12, which will run only on x86-compatible 64-bit servers, said Bob Kelly, general manager of infrastructure server marketing at Microsoft.
Kelly said 64-bit chips will make the greatest impact on the performance of applications such as Exchange and its SQL Server database.
"IT professionals will be able to consolidate the total number of servers running 64-bit (processors) and users will be able to have bigger mailbox size," he said.
Longhorn Server R2 and a small-business edition of Longhorn Server will be available only for x86-compatible 64-bit chips, as will the company's Centro mid-market bundle. Longhorn Server is expected to be released in 2007, and the R2 follow-up could come two years after that.
Tuesday, November 15, 2005
UPDATE: Now it's up to 24 instructions (half the instruction set). The emulation core is about 2/3 KB of optimized assembly. Should be no problem keeping the emulation function and the CPU context (most importantly the registers) in L1 cache, making it about as fast as possible.
in the latest wow patch, they finally admit to searching your computer for viruses and cheats
Quantam (11:17 PM) :
I saw that
durandal255 (11:17 PM) :
it pokes through my IE history!
Quantam (11:17 PM) :
That's not cool
durandal255 (11:17 PM) :
god knows what it does after that!
durandal255 (11:18 PM) :
it also looks at your autoexec.bat, and um, your start menu and desktop shortcuts
Quantam (11:18 PM) :
I bet MS' malicious software removal tool does less than that
durandal255 (11:19 PM) :
it looks at ntuser.log and both the 9x and NT temp folders
durandal255 (11:20 PM) :
i bet i'd find it inspecting my MBR if i knew how to look for that
Quantam (11:20 PM) :
If you've got any digital signatures using MD5, I suggest you FIX THEM, NOW! (Not that I know exactly what to use; SHA-1 is on its last legs.)
UPDATE: Fortunately, it isn't as bad as I thought it was. This program can only produce two randomly generated messages that hash to the same value; it cannot find a new message that matches a given hash. No complete security meltdown yet, but it's still fair to say that MD5 is no longer safe to use.
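The distinction above is the difference between a collision attack (the attacker picks both messages) and a preimage attack (the hash is fixed in advance). A toy illustration: if we truncate MD5 to 24 bits, a birthday-style collision search succeeds after only ~2^12 tries, while a preimage on the same 24 bits would need ~2^24. This is just a sketch on truncated hashes to show the asymmetry; the real 2004/2005 results attacked full MD5, and the names below are invented for illustration.

```python
import hashlib

def h24(data: bytes) -> str:
    """MD5 truncated to its first 24 bits (6 hex chars) -- toy hash."""
    return hashlib.md5(data).hexdigest()[:6]

def find_collision():
    """Birthday search: the attacker controls BOTH messages, so a match
    is expected after only about 2**(24/2) = ~4096 candidates."""
    seen = {}
    i = 0
    while True:
        msg = str(i).encode()
        digest = h24(msg)
        if digest in seen:
            return seen[digest], msg  # two different messages, same hash
        seen[digest] = msg
        i += 1

m1, m2 = find_collision()
assert m1 != m2
assert h24(m1) == h24(m2)
```

Finding a *second preimage* for one specific digest, by contrast, would mean brute-forcing all ~2^24 truncated values, which is why "can't match a given hash" is the saving grace for existing signatures.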
Now that there's a formal debate forum (requires registration to view/post in) with more strict rules for posts, I've begun to periodically start debates (in fact I have a list of four or so I plan to start in the foreseeable future). The flavor of the week is the nature of indirect responsibility for something. The opening post (which is just to get the thinking started, before the debate begins):
This is, to my knowledge, a fictitious story (although it would hardly surprise me if sometime, somewhere in history it actually happened).

A more recent post, which introduces the debate itself:
There once was a husband and wife. The husband worked nights, and the wife frequently became lonely, and went out to meet lovers during the night, although she always returned home before her husband. The wife always cut off the relationships if the lovers wanted it to get serious (as in, endangering her marriage).
One night, she was doing just that: dumping a lover. She had just left the lover's apartment, and was about to go home, when she realized she didn't have money for the ferry she would have to take to get back to her house. Reluctantly, she went back and asked the lover if she could borrow some money. Not surprisingly, the lover slammed the door in her face.
She then went and asked her previous lover (call him #2) for money, who lived nearby. He also slammed the door in her face. So she went back to the ferry, and begged the ferry operator to let her ride for free, and she would pay him back. He refused.
Finally, she remembered there was a bridge a ways away, but she thought she could still make it home in time. However, this bridge was known as a dangerous area, especially at night. So she took the bridge and, as luck would have it, got mugged. Angered that she did not have any money, the mugger stabbed her, and she died.
Now, how would you assign blame for the death of the woman? Rank the six characters (the husband, the wife, lover #1, lover #2, the ferry operator, and the mugger) from most responsible to least responsible in your post.
Well, where I was hoping to go with this was a debate about what constitutes responsibility for something like this.

Don't bother replying here. If you want to join the debate (which is the whole point of me posting about it), go to the debate itself.
As for myself, I'd say the blame belongs first and foremost to the mugger, as the mugger is the one who actually killed her. But I don't think it's correct to say the woman didn't contribute to it. She made several choices that contributed directly or indirectly to her death. In chronological order:
- She chose to be out there having an affair. While getting killed by a mugger is not a foreseeable outcome of having an affair, I have little sympathy for people who get hurt or killed as a result of perpetrating some crime (in this case the crime is a moral one; as I said a couple posts up, I consider being faithful to your spouse part of the job description for anyone who's married). If a terrorist gets blown up due to a bomb malfunction while trying to bomb some place, all I'm going to say is "Haha, loser!"
- She chose to take a dangerous route in the middle of the night. While that isn't to say she was "asking" to be killed, the fact remains that when you do something fairly dangerous (and you have viable alternatives), you have to take some responsibility for the plausible, predictable outcomes, of which this was one. If I'm welding something while not paying attention, and I end up burning myself (a plausible, predictable outcome for welding carelessly), it's my fault for not being more careful. If, on the other hand, the propane tank explodes and kills me due to some manufacturing defect (neither a plausible nor predictable outcome), that's the manufacturer's fault.
In other news, the usual response to that story (it's commonly used in college psychology classes) is that about half the people blame the woman primarily, and the other half the mugger. I guess this goes to show that you become more conservative with education...
UPDATE: As those of you (assuming there are any of you out there reading this blog) probably noticed, the Star Alliance site went down a day after I posted this entry, and has been down ever since. Seems they had some problems with their host, and are in the process of relocating. I'll try to remember to post when they come back up.
Saturday, November 12, 2005
Friday, November 11, 2005
Back when I made the post about Singularity, I sent Merlin (of Camelot Systems fame, and who now works as a coder for Microsoft) the link, to ask what he thought of Singularity. He provided me with some food for thought, although I didn't get around to writing about it until now (story of my life...).
His overall verdict on Singularity was that the idea was 'idiotic'. He had two reasons for this conclusion. First, he claimed that the quality of the JITer is not sufficient for this kind of thing, given that the JITer becomes the single most important piece of software in Singularity with respect to security and stability (as I said in my post).
Second, he claims that the idea of the JITer being the gatekeeper to system security is fundamentally flawed in that it can't control the hardware. It can certainly ensure that software doesn't have access to the hardware, and that drivers communicate only in well-defined (and legal) ways, but the JITer has no way to verify that the data drivers actually send to the hardware is valid. Even with a JITed system, it's possible a driver might give the wrong address or buffer size to the hardware, and the hardware writes to it, corrupting program or system data (or even worse).
This second point is particularly valid, as I've seen first-hand (my knowledge of the JITer itself is insufficient to comment on the first point). Take DD3D (that's 'DirectDraw 3D'), a little library I was writing that displays a DirectDraw surface as a Direct3D texture map, as an example. The test program would recreate the Direct3D device every time you resized the window, so that it could use the right size of back buffer (for optimal image quality). This meant frequent destruction and creation of Direct3D devices. Well, as it turned out, the program initially had a reference count leak that prevented each Direct3D device from actually being destroyed before another one was created; Direct3D even complied with the requests to create new devices.
Eventually, this exhausted some system resource, and it broke. And by 'broke' I don't mean it threw a "screw you, I'm not making any more devices" error (which would have been an appropriate response in this situation). Nor did the program crash, or even blue-screen. Nope; once it got above some number of Direct3D devices created, it hard-reset the computer. That is, blammo, black screen, "testing memory", "press
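The bug behind that story was a classic COM-style reference count leak. Here's a minimal abstract sketch of the pattern (the class and names are invented for illustration; this models the bookkeeping, not the actual Direct3D API):

```python
class ComObject:
    """Minimal model of COM-style reference counting (illustrative only)."""
    live = 0  # count of objects whose refcount never reached zero

    def __init__(self):
        self.refs = 1
        ComObject.live += 1

    def add_ref(self):
        self.refs += 1

    def release(self):
        self.refs -= 1
        if self.refs == 0:
            ComObject.live -= 1  # resource actually freed here

def resize_leaky(n):
    """Leaky pattern: a new 'device' is created on every resize, but some
    other object (say, a texture) took a reference that is never released,
    so the old device's refcount never hits zero."""
    for _ in range(n):
        dev = ComObject()
        dev.add_ref()   # e.g. a texture still holding the device
        dev.release()   # our own reference is released...
        # ...but the texture's reference never is: the device stays live

resize_leaky(10)
assert ComObject.live == 10  # every "device" leaked
```

Because the runtime happily keeps honoring creation requests, the leak only surfaces when some underlying resource runs out, which is exactly what the JITer-as-gatekeeper model can't guard against.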
So, maybe this isn't such a viable idea after all.
Also, on a totally unrelated note, I seem to be accumulating quite a harem in my Temp tab of my ICQ contact list (where I put all the people who spontaneously add me to their contact list with no prior contact). 11 girls and counting (and those are the ones that aren't porn bots - I've broken half a dozen porn bots this week alone with my first reply); apparently 'Justin' is a popular name girls from countries all over the planet search for to find people to chat with (one of them said that's how she found me). *shakes head* Heck, at least half of them have never even messaged me.
- Only method that allows threads to wait until the operation completes
- Not useful in most other cases
- Potentially high latency on POSIX
Asynchronous Procedure Calls
- Calls only occur in the thread that requests the I/O
- Calls can be deferred until the thread is ready for them
- Most convenient method for single-threaded programs
- Must be polled for in the thread that requested the I/O
- Potentially high latency on POSIX
I/O Completion Ports
- Potentially fastest (highest throughput), most scalable method on multi-CPU systems, due to optimized thread pooling architecture
- Cumbersome to use, as it often requires construction of a finite state machine
- Not particularly suitable for single-threaded programs
- Potentially high latency on POSIX
Unpredictable Callbacks
- Lowest latency method on POSIX
- Potentially fast on multi-CPU systems, if the OS does CPU load balancing of callbacks
- Calls may occur in any thread at any time
- Must be polled for in the thread that requested the I/O
Wednesday, November 09, 2005
Thus, the practical difference between APCs and unpredictable callbacks is that unpredictable callback functions have to be thread-safe. If they access any shared data, it must be protected by thread synchronization (with APCs you could sometimes get away without thread safety, if the "shared" data was only used by the thread that started the I/O). Of course, because it's possible that unpredictable callbacks will be implemented as APCs, it's still necessary to regularly dispatch APCs for the threads that request I/O.
So, what's the point? Seems like a lot of work for a lot of uncertainty. Well, the usual answer for questions like that about LibQ is speed. Windows implements APCs natively; POSIX does not. Instead, POSIX implements asynchronous I/O notifications via signals, one form of which is unpredictable callbacks. All other types of notification can be readily emulated using POSIX callbacks (as they're the single most flexible method of asynchronous I/O notification), but this comes at a speed cost.
The speed cost itself isn't very large (several dozen cycles), but the latency introduced is much worse (could be hundreds of milliseconds in the worst case). For situations where a low-latency response is needed, this might be too much. If, on the other hand, latency isn't important, other methods might be more convenient (or even faster, e.g. completion ports).
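The emulation described above - delivering APC-style notifications on top of callbacks that may fire in any thread - can be sketched with a per-thread completion queue. This is a hypothetical sketch, not LibQ's actual implementation; the function names are invented:

```python
import queue
import threading

# Completions fired from any thread are enqueued, then dispatched only
# when the requesting thread polls -- mimicking Windows-style APCs on
# top of "unpredictable" POSIX-signal-style callbacks.
apc_queue = queue.Queue()

def completion_from_any_thread(result):
    """What the unpredictable callback does: just enqueue and return.
    Must be thread-safe, since it can run in any thread at any time."""
    apc_queue.put(result)

def dispatch_apcs():
    """Called by the I/O-requesting thread when it's ready (the analogue
    of entering an alertable wait state on Windows)."""
    delivered = []
    while True:
        try:
            delivered.append(apc_queue.get_nowait())
        except queue.Empty:
            return delivered

# Simulate an I/O completing on a worker thread...
t = threading.Thread(target=completion_from_any_thread, args=("read done",))
t.start()
t.join()

# ...and the original thread draining its queue at a convenient point.
results = dispatch_apcs()
assert results == ["read done"]
```

The queue is where the latency comes from: nothing happens until the requesting thread gets around to calling the dispatch function.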
Tuesday, November 08, 2005
*crosses item off life todo list*
That'll teach her not to be so ambiguous in playing her role that people had to ask what she was supposed to be (and nymphomaniac was the first thing that came to my mind after the first couple things she said). :P Was also hilarious when the nerd listed Screech Powers as his hero.
And for your information, the purpose of that exercise was to demonstrate the stereotypes associated with various roles.
BahamutZero's response to this post: "I do think I said you were a devious, corrupt, manipulative and all around dangerous person."
UPDATE: Today (Thursday) in psychology class the teacher was asking for attributes (taken from a list she passed out) that we thought were more typical of women than men. I answered 'tact'. I heard several people chuckle; I have a guess as to why :P
Friday, November 04, 2005
First, we have Sony installing a rootkit on the computers of anyone (with admin privileges) who puts the Get Right With the Man CD in their drive. This rootkit is a driver that hides itself from detection by hooking the Windows system call table and preventing any files with names beginning with "$sys$" from showing up in Explorer or anywhere else (you can readily test for the presence of this rootkit by renaming a file that way, and observing whether it disappears). After the public outrage from the Slashdot readers and others, Sony released a none-too-effective uninstaller.
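That rename test can be sketched in a few lines. This is a hedged illustration, not a real malware detector: on a clean system the probe file stays visible in a directory listing, while on a machine with the rootkit's filter driver loaded it would vanish (the file and function names here are my own invention):

```python
import os
import tempfile

def sysprefix_file_visible(directory):
    """Create a file whose name starts with "$sys$" and check whether a
    directory listing still shows it. With the Sony rootkit's hook in
    place, the listing would omit it; on a clean system it's visible."""
    name = "$sys$probe.txt"
    path = os.path.join(directory, name)
    with open(path, "w") as f:
        f.write("probe")
    try:
        return name in os.listdir(directory)
    finally:
        os.remove(path)

with tempfile.TemporaryDirectory() as d:
    assert sysprefix_file_visible(d)  # clean system: still visible
```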
In the same week (at least for me), news of the Warden got around. The Warden is Blizzard's anti-hacking tool for World of Warcraft (in the legacy of Work, Blizzard's neato hack detector for Starcraft, Diablo II, and Warcraft III). This one has the enjoyable function of scanning the programs running on your computer, and sending such things as the titles of open windows to Blizzard.
Finally, in a move of minor brilliance (and what makes an ideal final entry in summary posts such as this), hackers decided that it would be worth their time to use one to thwart the other; that is, to use the Sony rootkit to hide their WoW hacks from the Warden. Looks like it's gonna be a war between the video game and music industries over who's responsible for this mess.
I'm only replying to the parent so that this post is high up the screen.
Look at page 31 of this PDF. Microsoft publish benchmark statistics showing Linux (and FreeBSD) to be better than Windows.
Okay, so this post is so important he decided to ignore posting etiquette. The post refers to a table of benchmarks that shows the number of cycles needed for each of 6 things, on Singularity, XP, FreeBSD, and Linux. If we ignore Singularity, which has the lowest - and thus best - scores in 5 of 6 categories, Windows XP holds the lowest score in 3 categories, Linux in 3 categories, and FreeBSD in none (however, FreeBSD does have a lower score than XP in 1 category). As far as proof of Linux/FreeBSD superiority goes, that's pretty underwhelming (and while I could be grossly ignorant, I don't recall MS ever claiming that Windows was superior to Linux on every single data point).
Of course, these are simply statistics showing off the abilities of Singularity (that's just common sense - when you go to great lengths to make something faster than its competitors, you want to show that it's faster than its competitors), and a much too small sample size to draw any kind of conclusions.
Even more disturbing is that most of the replies to this post are along the lines of "Well, duh. Everybody knows that Windows blows; MS just finally stopped lying about it." And they wonder why Slashdot has a reputation for being a bunch of fanatic Linux zealots who couldn't think rationally if their lives depended on it...
Thursday, November 03, 2005
This thing is pretty sweet. It's like .NET (or Java) applied to an entire computer (OS, drivers, and applications), and then some new ideas. From what I've read, there are two basic ideas that set Singularity apart from any existing production OS. First, the entire system, with the exception of the microkernel, is JITed code. This is a huge benefit because it allows the JITer to verify that the code is safe before it ever gets executed. In a garbage-collected language without pointers, this means no more access violations or buffer overflows, period. It also means the code can't pull exploits like those that can give elevated permissions, or screw up some other thread/process' data.
In fact, because the OS audits all code before it ever gets executed, there's no need for multiple processes at all; indeed, all logical processes in Singularity run in the same virtual address space (commonly known as a 'process' on today's OSs; and in theory you could have everything running in kernel mode and it would still be safe). The fact that code can be guaranteed to be well-behaved on load also removes the need for most (but not all) checks for things like parameter validity, access control, etc., making programs run faster than has ever been possible.
The other major premise of Singularity is strict modularization. All code exists in its own "sandbox". Code is loaded as a unit, either as an application, with multiple code modules, or as a single library. Once a sandbox is created, no new code can be loaded in it.
However, it's possible to call from one sandbox to another. This communication is governed by interface metadata that dictates exactly what is and is not allowed, and is mediated by the JITer. While inter-process communication (IPC) has always been painfully slow in the past, it is not so in this case. Because all code is JITed, the JITer can verify that new code follows the rules, then give it direct access to what it's trying to reach, creating 0-overhead IPC.
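The "verify once at load time, zero checks per call" idea can be modeled in a few lines: a channel contract is checked when a module is wired up, and after that, calls go straight to the handler with no per-call mediation. This is a toy analogy to Singularity's channel contracts, with every name invented for illustration:

```python
class Contract:
    """Interface metadata: the set of messages a channel permits."""
    def __init__(self, allowed_messages):
        self.allowed = frozenset(allowed_messages)

def bind(contract, handler_table):
    """Load-time verification, analogous to the JITer auditing code:
    reject the module once, up front, if it violates the contract."""
    if set(handler_table) != contract.allowed:
        raise ValueError("module violates channel contract")
    # After verification, hand back direct access -- no per-call checks.
    return handler_table

fs_contract = Contract({"open", "read", "close"})
fs_module = bind(fs_contract, {
    "open": lambda name: f"handle:{name}",
    "read": lambda handle: b"data",
    "close": lambda handle: None,
})

# Calls now go straight to the handler, "0-overhead IPC" style.
assert fs_module["open"]("log.txt") == "handle:log.txt"
```

The point of the analogy: the expensive part (validation) happens exactly once, so the steady-state call path is as cheap as a direct call.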
Unfortunately, as MS has so much invested in Windows already, it's not looking to turn this into an actual product. However, I think it's a really promising idea, and I hope that somebody at some point will try to make a commercial OS based on this kind of thing.
Oh, and on a completely unrelated note, that overview paper has a benchmark comparing the speed of creating processes on various OSs (one of the things I'd been wondering about for a while): 5.4 million cycles for XP, compared to Linux's 720k (roughly a 7.5x difference).