So, in between work, playing WoW (actually very little, these last few weeks), playing MechCommander 2, and watching anime (Bokurano, Lucky Star, D.Gray-Man, and Death Note), I've been researching Japanese adjectives and verbs a bit. You could call this an errata post on the topic.
Hmm, where to start? Well, first of all, I was a little bit off in saying that the verb of a sentence and the verb of a relative clause are identical. I was right in that there isn't a difference between the forms in modern spoken Japanese. However, according to An Historical Grammar of Japanese (don't ask me why it's 'an'; I don't know), the forms of verbs and adjectives in those two roles used to differ for some verbs (there are several conjugation patterns, formed by progressive breakdown of the original grammar) in the written form of Japanese.
Japanese is a bit different from English in that there are several forms of it, spoken and written Japanese being the ones most relevant to us. Compared to spoken Japanese, written Japanese (I'm not exactly certain what writings this includes, but I'd imagine things like textbooks and scholarly works) is rather archaic. By my understanding (keeping in mind that I'm hardly an expert on Japanese), written Japanese is to spoken Japanese what Old English or Middle English (somewhere between the two, to be precise) is to modern English. It's somewhat similar to spoken Japanese, but there's a lot of difference in things like grammatical suffixes (like using the -nu suffix in the written language to indicate a negative, but using the adjective 'nai' in the spoken language) - enough that it would take some effort for someone who only knew spoken Japanese to figure out what was being said in something using written Japanese. I suppose you could argue that modern English retains some archaic spellings in the written form as well (like writing 'knight', even though it's pronounced more like 'nite'), but this generally doesn't affect grammar so much, apart from some common contractions like 'wanna' and 'dunno'.
Now, getting back to Japanese adjectives. Historically, there are five distinct base forms of Japanese verbs, two of which are of concern to us, here: the predicative, the form used for the main verb of the sentence; and the attributive/substantival, used for the verb/adjective of relative clauses and when using a verb/adjective as a noun. One thing that differs between spoken and written Japanese is that spoken Japanese has replaced the predicative form of verbs and adjectives with the attributive form.
Going back to our examples from last time (and noting that we're now into the realm of written Japanese, my knowledge of which rates just above nonexistent), "the dog is bad" would be written "inu wa warushi" (predicative form), while "bad dog"/"dog that is bad" would be "waruki inu" (attributive form; note also that 'waruki' has become 'warui' in the spoken language). Both predicative and attributive forms are the same for 'kamitsuku' in modern written Japanese, so I'll use a different verb for the example. "the dog eats" would be "inu wa tabu" (predicative), and "eating dog"/"dog that is eating" would be "taburu inu" (attributive; note that the verb has become 'taberu' in spoken Japanese).
I've done a tiny bit of looking into the matter in Korean, and it appears there is a difference between predicative and attributive forms there as well. This is an interesting piece of information, as it indicates that in Korean (and in older Japanese) this predicative/attributive difference took the place of the relative pronouns in English and other Indo-European languages ('that', 'which', etc.). Of course, spoken Japanese has lost this interesting trait, replacing it with analytic methods, where the position of the verb/adjective alone indicates whether it's the main verb of a sentence or the verb of a relative clause.
Moving on to a different but related topic. Unfortunately, as attractive as the idea of a language where verbs and adjectives are one and the same is to me, there is some evidence in Japanese to the contrary, and this characteristic seems to me to be something Japanese (and by correlation Korean) evolved, rather than initially possessed in the language's original form (although it's not impossible that Japanese merely lost such an initial form and later reconstructed the functionality through different means; unlikely, but not impossible).
Taking a step backwards in time, we find that there are five bases to Japanese verbs: the predicative; the attributive; the conjunctive/adverbial (used as an adverb or to express things such as "he bled and died"); the imperfect (used for various compound forms); and the perfect (an action that occurred in the past which determines the present state; an example from the book is "Nara no wagie wo wasureteomoe ya" - "have I forgotten my home, Nara?"). Using the verb 'shinu' - 'die' (this is actually an irregular verb, but there is evidence that it preserves the original, true verb conjugation), the forms are 'shinu', 'shinuru', 'shini', 'shina', and 'shinure' (note that two of them appear to be compound forms, formed by adding a form of 'uru' - another verb meaning 'exist').
Now, compare those forms to the conjugation of the adjective; using the adjective with the root 'taka' ('high'): 'takashi' (predicative), 'takaki' (attributive/substantival), 'takaku' (conjunctive/adverbial), and 'takakere' (perfect). We can see three things, here. First, there is no imperfect form of the adjective; second, all of the adjective forms appear to be compound forms, with various suffixes tacked on to the root; lastly, three of the four conjugations have a 'k' in their suffix.
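To make that suffix pattern concrete, here's a tiny illustrative sketch (my own toy example, not something from the book) that builds those four classical adjective forms by gluing the suffixes onto the bare root:

```python
# Toy model only: classical Japanese adjective forms as bare root + suffix,
# following the forms discussed above (takashi, takaki, takaku, takakere).

ADJECTIVE_SUFFIXES = {
    "predicative": "shi",    # takashi
    "attributive": "ki",     # takaki (has become 'takai' in the spoken language)
    "conjunctive": "ku",     # takaku
    "perfect": "kere",       # takakere
    # note: no imperfect form for adjectives
}

def conjugate_adjective(root):
    """Build the four classical adjective forms from a bare root like 'taka'."""
    return {base: root + suffix for base, suffix in ADJECTIVE_SUFFIXES.items()}

for base, form in conjugate_adjective("taka").items():
    print(base, form)
```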
While this does not answer the question conclusively, to me it appears that initially adjectives were true adjectives (either that or nouns), consisting merely of the root, and over time were made to resemble verbs by the addition of various suffixes. This is in agreement with the fact that, in the oldest known Japanese writings (from about 800 AD), in addition to the forms just shown, we occasionally see the use of the root alone as an adjective or a noun (this fact brought to you by the same book).
Tuesday, June 26, 2007
Slashdot: The GPL vs. DRM
I suppose if I'm posting my Slashdot posts here, I might as well post the other one as well (going by the theory that it's marginally better than no blog posts at all :P). The post I was replying to (though you may have to read further up to get a feel for the context):
If you don't want to respect their license, that's fine, but then you shouldn't expect them to respect the GPL either.

There's an inherent difference here. Microsoft's licenses try to restrict you from doing things you would otherwise have the right to do. The GPL gives you rights to do things that you would not otherwise have. If you don't want to respect the GPL, that's fine, but you'd essentially be a software pirate if you distribute GPL software in violation of its terms. On the flip side, if you violate some of Microsoft's license terms, you might not have done anything illegal at all (running Vista in a VM, for instance). So I really do see a huge difference between the two licensing models, and therefore a difference between the nature of respect for them.

My response:
That is a beautiful piece of logic you have there. If you violate the terms of MS' license, you're okay, because they were artificial and arbitrary restrictions, anyway. If you violate the GPL's equally artificial and arbitrary limitations, you're a pirate and a lawbreaker, because you've violated the terms of the license. See how absurd it is?
Now, I'm a programmer. I've recently been working on releasing a couple of my programs as open source, so I've had to take a good look at the various licenses, and see which one is closest to my ideals. Just about anything but the BSD license (and arguably even that, though that would almost be splitting hairs) is indistinguishable from DRM, save for one exception: most open-source licenses attempt to achieve maximal collective benefit (rights), while DRM seeks nothing more than to maximize the benefit (profit) of the creators. That is, DRM and source licenses both prevent you from doing things with the code/media that you would otherwise be able to do; if you think differently, you surely have given up the term "DRM" in favor of "consumer enablement" (which it actually looks like you have, from your post).
The CDDL, the license closest to my ideals, is based on a single restriction: that if you modify the open code, you have to keep the CDDL for your changes, keeping the work open; so long as this rule is followed, you can use the code in any way, in any project. This is an arbitrary restriction on the ability of other people to use my code. However, I justify this restriction with the reasoning that I want as many people as possible to be able to make use of my code (and thus any advances to it). I'm sacrificing the ability of individuals to use my code in an unrestricted manner for the calculated benefit of the whole programming community.
While the GPL does this as well, it does something else that I consider uselessly arbitrary (that is, it limits the freedom of users without contributing significantly to the common good) and, for that reason, particularly obnoxious. Anyone who's read the GPL knows what I'm referring to: the requirement that any project which so much as uses GPL code must itself be GPL in its entirety. This is a political rather than practical requirement: the GPL serves to promote free software, and will restrict the freedom of users to attempt to increase the amount of free code available in total. I'd imagine the reasoning is that if all software were free and open, the world would be a better place; but I can't really agree with the sentiment or the means used to achieve it. The LGPL is better, but not as close as CDDL to my ideals (if you want more info on the topic, I wrote a several-page justification of my choice of license on my blog).
My Stance on the RIAA/MPAA
Well, I started out writing a reply to a post on Slashdot. But by the time I finished (some hour later), I'd written something as long as one of my typical lengthy blog posts. So I figured I might as well post it on my blog :P
First, the original post I was replying to (from this thread):
So its pretty clear that going after individual copyright violaters is looked down upon on slashdot. I also remember back when napster was big everyone on here was upset that they were getting sued because they werent actually breaking the law, it was just the individual users. So is it just one group who thinks that indivuals shouldnt be sued and a different group who thinks that the companies should be immune? How should the RIAA protect its intellectual property rights? Is it just a fundamental belief on here that copyright holders should have no recourse against violaters?

And my reply:
I'm not sure they can defend themselves, frankly. I guess I differ from most of the most vocal Slashdot people in that I believe P2P file sharing (specifically a subset of that - theft) is hurting the music and movie industries, and I believe that stealing content via P2P is morally wrong. That said, I oppose their suits on a number of grounds (possibly more than I can recall off the top of my head); to name a few:
- Even in the worst-case scenario - that the downloader and everyone they uploaded to are truly stealing the content (morally wrong) - the suits they are bringing are orders of magnitude greater than the theft committed. $3k-$5k out-of-court settlements are nearly universal; yet most people will not upload much more than the amount they downloaded, and even in extreme cases not more than several times what they downloaded. So, if they downloaded a music CD worth $15, they have (assuming all parties are stealing) stolen (or willingly facilitated stealing) 2x-5x that (5x is a bit arbitrary), or $30-$75. Even at the high end of theft ($75) and low end of damages ($3k), that is *40 times* the value of what was stolen; the other way around ($30 and $5k), it's almost 200 times the value (see the quick calculation after this list). Can you honestly tell me that you support that? No, really, I want you to type in your answer and click "submit".
- The RIAA has been using downright illegal tactics to bring these cases (see Beckerman's site for details about this). And I'm not throwing the term "illegal" around lightly, like many Slashdotters. I mean they are literally defying direct orders made *by the courts* in order to bring law suits that they would not have been able to bring using purely legal methods.
- The RIAA has been using legal but immoral tactics to bring these cases. Most commonly, the RIAA picks people who they believe do not have the resources to defend against the suit, making the outcome the same, regardless of whether they are guilty or innocent (and short-circuiting the justice system entirely): they have to settle, because they can't afford a lawyer to defend themselves (and if they do hire a lawyer, the RIAA lets them accumulate a sizable bill, then drops the case, so the people will usually have to pay more than the settlement would have been). This is done to build fear by maintaining a perfect prosecution record, to discourage others from sharing files or hiring a lawyer to defend themselves. I know of not a single case the RIAA has won - that is, the case has gone to verdict and the verdict was in their favor; if you know of a single case, click "reply" and link to it, right now. And because they drop cases once it becomes obvious they cannot win them, they have just the same never suffered a loss (though that tide appears to be slowly turning). They care so much about their perfect record that they will continue suits against people who they either knew from the beginning or learned during the trial are innocent, because dropping the case would show that they don't have absolute power, and diminish the fear they seek to create. Do you support each and every one of these methods? Again, that is not a rhetorical question; I am expecting a real answer.
- The RIAA needs to preserve their undefeated status because they cannot possibly catch all of the true P2P thieves out there (or even enough to save their business), let alone distinguish the thieves from the ones whose uses are not immoral (such as seeing if they want to buy an album/movie legally). This is a painful exercise in futility. Their only (false) hope is to try to generate enough fear to make everyone they don't have the resources to sue stop sharing. This is *not* working, and cannot in theory work. While it's true that they may be able to reduce (not stop) theft via P2P in the US by going after web sites and individual P2P users, where they have the power of law suits, such suits cannot be brought in the majority of the world because they do not have jurisdiction (let alone the resources to sue that many people), meaning that even fear is outside their reach.
- DRM is also a painful exercise in futility, and cannot stop people from making or sharing copies of media for any purpose, including true theft. Furthermore, it provides no inhibition for hackers who seek to make the content available (again, for any reason), but greatly inconveniences the casual user who isn't technically inclined, and wouldn't share the content anyway. Nevertheless, such a technically illiterate user will have no trouble downloading media ripped by an aforementioned professional hacker, meaning all that inconvenience is for no benefit whatsoever.
- Even in the most draconian of outcomes - the RIAA getting the government itself on their side and using government resources to hunt down copyright infringers - that only goes as far as the US, and we've already been over that.
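Just to make the arithmetic in the first point explicit, here's a quick back-of-the-envelope calculation (the numbers are the same rough figures used above - a $15 CD, a 2x-5x upload multiplier, $3k-$5k settlements - nothing more authoritative than that):

```python
# Back-of-the-envelope numbers from the first bullet point above; the 2x-5x
# upload multiplier and the $3k-$5k settlement range are rough estimates, not data.

cd_price = 15.0                    # value of the downloaded CD
upload_multiplier = (2, 5)         # assumed range of how much gets re-uploaded
settlement = (3000.0, 5000.0)      # typical out-of-court settlement range

theft_low = cd_price * upload_multiplier[0]    # $30
theft_high = cd_price * upload_multiplier[1]   # $75

# Most favorable comparison for the RIAA: highest theft estimate, lowest settlement.
print(settlement[0] / theft_high)  # 40.0  -> the settlement is 40x the "theft"

# Least favorable: lowest theft estimate, highest settlement.
print(settlement[1] / theft_low)   # ~166.7 -> nearly 200x
```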
Thus, in conclusion, the RIAA/MPAA are on a sinking ship, and even if they were on the moral side (that is, they didn't have any of the moral or legal problems I've outlined), I don't believe they could stop it. People throw around this "time to find a new business model" all the time on Slashdot, usually because they believe there's something wrong with the old one. I don't know if I'd agree that the old one is wrong (or that it should be abolished), but I think what I think and what the RIAA/MPAA think is irrelevant: the end is coming - you adapt (and perhaps get "right-sized") or you die.
Lastly, I should add that I'm not categorically opposed to what the RIAA/MPAA try to accomplish. I think it's great news when they and/or the government bust large-scale counterfeiting rings, and I'm certainly not one of those "Who cares, music should be free" hippies (that's an actual quote from Slashdot, by the way). And for what it's worth, I'm a big fan of watermarking content. Ideally, they should be going after the people who make the content (widely) available - counterfeiters and the ones who initially rip the content (as they make it available for many, many people, potentially creating huge damages proportionate to their own numbers; as well, it's much less morally defensible to rip copyrighted content, as you know you will personally fuel a huge amount of theft - in the case of sharers, without so much as some benefit to yourself) - not the ones who download the content, which, as stated earlier, may not be immoral, depending on the context.
Monday, June 04, 2007
Adjectives, English, and Japanese
While in the shower today, I was struck by a random thought (this happens very often) for a blog post.
In English, we have relatively clearly defined categories for words, the main ones being nouns, adjectives, adverbs, and verbs. Of course, there are fundamental processes for changing words from one class to another; verbs may become nouns ('run' -> 'running'); adjectives may become adverbs ('quick' -> 'quickly'); nouns may become adjectives ('beauty' -> 'beautiful'); and in processes that I'm not entirely sure are grammatically correct, verbs may become adjectives ('castrate' + 'sledgehammer' -> 'castrating sledgehammer') and nouns may become verbs (e.g. "Verbing weirds languages" - Calvin). However, there are lots of ways of designing languages that don't have such clear distinctions. For the sake of keeping this relatively brief, I'm going to restrict this post to dealing with adjectives.
To the best of my understanding, Chinese does not have a clear distinction between nouns, verbs, and adjectives. Chinese is a strongly analytic language: the position of a word in a sentence and other words around it determine what role it plays. Japanese borrowed many of these Chinese words, and uses them also in all three categories.
For a well-known example, the word 'baka'. Used alone, it could mean 'idiot', 'mistake', 'stupidity', or something trivial. However when combined with 'na' (a particle which makes a noun act like an adjective), it becomes an adjective (e.g. 'baka na inu' - 'stupid dog'; not to be confused with 'inu no baka' - 'dog's stupidity'). This is not so unlike English, as sometimes you can use 'of' in basically the same way the Japanese use 'na'. 'yakusoku' is similar. Used alone, it means 'promise', 'arrangement', or 'rule'. But when accompanied by the auxiliary verb 'suru', it becomes a verb (e.g. 'yakusoku suru' - 'to promise').
However, that's mainly used for Chinese loan-words and compound words (and I'd guess that's how you'd do verbing in Japanese). The Japanese language itself has a different mechanism for adjectives. Pure Japanese (ignoring words derived from Chinese) does not have (nor need) true adjectives. Rather, what we would consider adjectives are actually intransitive verbs. For example, 'warui' means 'to be bad/evil/inferior/unprofitable/wrong/at fault/sorry'. These 'adjectives' are conjugated in principle the same as verbs, but the suffixes are different (though there's some evidence from ancient writings that at one time they were the same; I believe they still are the same in Korean, a sister language of Japanese). So, if we wanted to say, for example, 'the dog is bad', we would say 'inu wa warui' (in Japanese the verb comes at the end of the sentence; 'wa' indicates the topic of the sentence).
However, it's possible for such words to also be used as adjectives. For example, 'warui inu' would mean something like 'bad dog'. How do we account for that? The nature of the Japanese and Korean languages accounts for this rather beautifully, actually. The key, here, is how they construct sentences and clauses. Japanese has no relative pronouns ('that', etc.; e.g. 'dog that loves Kaity'); Japanese relative clauses are formed by putting the clause in front of the noun the clause relates to. So, for example, 'dog which bites' would be 'kamitsuku inu' ('kamitsuku' is the verb 'bite'); actually, this, like much of the Japanese language, is a bit ambiguous, as it could also mean 'dog that is bitten'. Thus 'warui inu' would literally mean 'dog which is bad', having the same meaning as 'bad dog' (as we would use in English, rather than the relative clause form).
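If it helps to see it laid out, here's a toy sketch (purely my own illustration, not any real treatment of Japanese) of that position-based distinction - the same word serves as the sentence's verb when it follows the topic, and as a relative clause/'adjective' when it sits directly in front of a noun:

```python
# Toy illustration of the word-order point above: position alone decides whether
# 'warui' acts as the main verb of the sentence or as a relative clause modifying a noun.

def main_clause(topic, verb):
    """Main-verb use: '<topic> wa <verb>', e.g. 'inu wa warui' ("the dog is bad")."""
    return f"{topic} wa {verb}"

def relative_clause(verb, noun):
    """Attributive use: '<verb> <noun>', e.g. 'warui inu' ("bad dog"/"dog that is bad")."""
    return f"{verb} {noun}"

print(main_clause("inu", "warui"))           # inu wa warui
print(relative_clause("warui", "inu"))       # warui inu
print(relative_clause("kamitsuku", "inu"))   # kamitsuku inu ("dog which bites")
```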
You'll notice that English has actually developed something similar. Consider the term I coined - 'holy castrating sledgehammer'. In this case, what is meant is 'sledgehammer which castrates'. Here, we use a different form of the verb to make it into an adjective, but the basic concept is the same.
Saturday, June 02, 2007
The Predicative
Next up on the list of ongoing changes to the English language is something that has become so uncommon that many native English speakers don't even know about it - an indication that the loss of it is at a very late stage, making it almost certain that it will eventually completely disappear from English. Consequently, I'll have to take a bit to explain it.
Let's suppose, for the sake of explaining this, that English had the distinction between subjective and objective case for nouns and adjectives, as it still does with pronouns. For a simple sentence, such as "The dog has a bone", it's clear that "the dog" is in the subjective case and "a bone" is in the objective case, as they are the subject and direct object, respectively. Adjectives are also pretty intuitive. In "The stupid dog has a bone", you'd probably guess (correctly) that "stupid" would also be in the subjective case; whereas in "The dog has a big bone", "big" would be in the objective case.
Let's make it slightly harder. In "Biggin's, my dog, has a bone", what do you suppose the case of "dog" is ("my" is clearly in the genitive case, and "Biggin's" and "a bone" are also obvious)? We might imagine a language which features an elaborative case - a case used for stating information elaborating on a previous noun; however, I don't know of any such language, and Indo-European languages certainly aren't among them. In Indo-European languages, elaborations such as this, similar to adjectives, agree in case with the thing they are elaborating. So here, it would be the subjective case. This becomes even more intuitive when you consider that "My dog Biggin's has a bone" means exactly the same thing.
Given that information, you should have no trouble figuring out the cases of nouns and adjectives in "Kaity, my fat cat, likes Biggin's". But suppose we have "Kaity is a fat cat"; now what? "fat cat" is acting as the direct object of the verb, so you might think it would be in the objective case. While this is very common these days, it's wrong. "fat cat" is called a predicative - a noun or adjective phrase in a descriptive sentence that elaborates the subject - and is a special case of the rule stated in the last paragraph; in Indo-European languages, predicatives agree with the subject - the thing they are elaborating.
Honestly, even I don't know how to use predicatives correctly 100% of the time (and I actually do use them correctly even less, as they tend to sound archaic to modern English speakers; which sounds better - "That would be I" or "That would be me"?). Specifically, I don't know exactly how many verbs they apply to. I know they apply to any sentence of the form "[something] is [some noun, adjective, or pronoun]". Same with some verbs of perception, such as "appear". I believe they also apply to some other cases, where there is a comparison of two things with regard to a particular adjective, but I don't know the rules for that very well.
Anyway, the reason this is being lost should be obvious: English has not had a distinction between subjective and objective case for nouns and adjectives for a good 1000 years. This makes it much less used now than it was then, as it's rare for there to be an elaborating pronoun in the predicate. And as with all things related to language, the less frequently something is used, the more likely it is to mutate or be lost (for example, notice how the verb "be" has retained far more peculiar conjugations than any other verb in the English language; that is because it is the most common verb in the language).
Occasionally, you'll see someone use the subjective case for one of the verb objects or a prepositional object, even though the noun should be in the objective case. This is often a case of hyper-correction. A person who is not familiar with predicatives may hear them used, perhaps at some high society gathering, and attempt to imitate them to sound more educated than they actually are. Naturally, as they don't understand the rules for predicatives, they end up misapplying them, with incorrect and awkward-sounding results.