Even though physiological and behavioral processes are maximized within relatively narrow ranges of temperatures in amphibians and reptiles, individuals may not maintain activity at the optimum temperatures for performance because of the costs associated with doing so. Alternatively, activity can occur at suboptimal temperatures even when the costs are great. Theoretically, the costs of activity at suboptimal temperatures must be balanced by the gains of being active. For instance, the leatherback sea turtle will hunt during the time of day when krill are abundant, even though the water is cooler and maintaining its body temperature therefore requires greater metabolic activity. In general, however, the costs of maintaining a suboptimal body temperature in reptiles and amphibians are varied and not well understood; they include risk of predation, reduced performance, and reduced foraging success.
One reptile that scientists understand better is the desert lizard, which is active during the morning at relatively low body temperatures (usually 33.0°C), inactive during midday when external temperatures are extreme, and active in the evening at body temperatures of 37.0°C. Although the lizards engage in similar behaviors during both activity periods (e.g., social displays, movements, and feeding), metabolic rates and water loss are greater and sprint speed is lower in the evening, when body temperatures are high. Thus, the highest metabolic and performance costs of activity occur in the evening, when lizards have high body temperatures. However, males that are active late in the day apparently have higher mating success resulting from their prolonged social encounters. The costs of activity at temperatures beyond those optimal for performance are offset by the advantages gained by maximizing social interactions that ultimately impact individual fitness.
Originally, scientists predicted that small asteroids would be hard and rocky, since any loose surface material (called regolith) generated by impacts was expected to escape their weak gravity. Aggregate small bodies were not thought to exist, because the slightest sustained relative motion would cause them to separate. But observations and computer modeling are proving otherwise. Most asteroids larger than a kilometer are now believed to be composites of smaller pieces. Those imaged at high resolution show evidence of copious regolith despite the weak gravity. Most of them have one or more extraordinarily large craters, some of which are wider than the mean radius of the whole body. Such colossal impacts would not just gouge out a crater—they would break any monolithic body into pieces. In short, asteroids larger than a kilometer across may look like nuggets of hard rock but are more likely to be aggregate assemblages—or even piles of loose rubble so pervasively fragmented that no solid bedrock is left.
The rubble hypothesis, proposed decades ago by scientists, lacked evidence until the planetologist Schumaker realized that the huge craters on the asteroid Mathilde and its very low density could only make sense together: a porous body such as a rubble pile can withstand a battering much better than an integral object. It will absorb and dissipate a large fraction of the energy of an impact; the far side might hardly feel a thing. At first, the rubble hypothesis may appear conceptually troublesome. The material strength of an asteroid is nearly zero, and the gravity is so low that one is tempted to neglect it too. The truth is that neither strength nor gravity can be ignored. Paltry though it may be, gravity binds a rubble pile together. And anybody who builds sandcastles knows that even loose debris can cohere. Oft-ignored details of motion begin to matter: sliding friction, chemical bonding, damping of kinetic energy, and so on. We are just beginning to fathom the subtle interplay of these minuscule forces.
The size of an asteroid should determine which force dominates. One indication is the observed pattern of asteroidal rotation rates. Some collisions cause an asteroid to spin faster; others slow it down. If asteroids are monolithic rocks undergoing random collisions, a graph of their rotation rates should show a bell-shaped distribution with a statistical “tail” of very fast rotators. If nearly all asteroids are rubble piles, however, this tail would be missing, because any rubble pile spinning faster than once every two or three hours would fly apart. Recently, several astronomers discovered that all but five observed asteroids obey a strict rotation limit. The exceptions are all smaller than about 150 meters in diameter, with an abrupt cutoff for asteroids larger than 200 meters. The evident conclusion—that asteroids larger than 200 meters across are rubble piles—agrees with recent computer modeling of collisions. A collision can blast a large asteroid to bits, but those bits will usually be moving more slowly than their mutual escape velocity (the lowest velocity a fragment must have in order to escape the gravitational pull of the others). Over several hours, gravity will reassemble all but the fastest pieces into a rubble pile.
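The spin limit and escape-velocity figures cited above can be checked with a back-of-envelope calculation. The sketch below is not from the passage; it assumes a strengthless body with a typical asteroid bulk density of about 2 g/cm³ and, for the escape-velocity estimate, a body roughly one kilometer across. Under those assumptions the minimum stable rotation period comes out near two hours and the mutual escape velocity is about half a meter per second, consistent with the reasoning in the passage.

```python
import math

# Assumed values (not given in the passage): bulk density ~2000 kg/m^3
# and a body about 1 km across (radius 500 m).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
density = 2000.0   # kg/m^3, assumed bulk density
radius = 500.0     # m, assumed radius of the body

# A strengthless (rubble-pile) body sheds material at its equator once
# centrifugal acceleration exceeds gravitational acceleration:
#   omega^2 * R > G * M / R^2, which gives a minimum spin period
#   P = sqrt(3 * pi / (G * rho)), independent of the body's size.
critical_period_s = math.sqrt(3 * math.pi / (G * density))
print(f"Critical spin period: {critical_period_s / 3600:.1f} hours")  # ~2.3 h

# Mutual escape velocity of fragments from a body of mass M and radius R:
#   v_esc = sqrt(2 * G * M / R)
mass = density * (4.0 / 3.0) * math.pi * radius**3
v_escape = math.sqrt(2 * G * mass / radius)
print(f"Escape velocity: {v_escape:.2f} m/s")  # ~0.5 m/s
```

Fragments ejected more slowly than roughly half a meter per second therefore cannot leave the debris cloud, which is why gravity can sweep most of them back into a pile within hours.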
Compared to regulations in other countries, those of the United States tend to be narrower in scope, with an emphasis on manufacturing processes and specific categories of pollution, and little or no attention to the many other factors that affect environmental quality. An example is the focus on controlling pollution rather than on influencing decisions about processes, raw materials, or products that determine environmental impacts. Regulation in the United States tends to isolate specific aspects of production processes and to control them stringently, which means that some aspects of business are regulated tightly, although sometimes not cost-effectively, while others are ignored. Other countries and several American states have recently made more progress in preventing pollution at its source and in considering such issues as product life cycles, packaging waste, and industrial energy efficiency.
Environmental regulation in the United States is also more prescriptive than elsewhere, in the sense of requiring specific actions, with little discretion left to the regulated firm. There also is a great reliance on action-forcing laws and technology standards.
These contrasts are illustrated nicely in a 1974 book that used a hare-and-tortoise analogy to compare air quality regulation in the United States and Sweden. While the United States (the hare) codified ambitious goals in statutes that drove industry to adopt new technologies under the threat of sanctions, Sweden (the tortoise) used a more collaborative process that stressed results but worked with industry in deciding how to achieve them. In the end, air quality results were about the same. Similar results have been found in other comparative analyses of environmental regulation. For example, one study of a multinational firm with operations in the United States and Japan found that pollution levels in both countries were similar, despite generally higher pollution abatement expenditures in the United States. The higher costs observed in the United States were thus due in large part not to more stringent standards but to higher regulatory transaction costs. Because agencies in different countries share information about technologies, best practices, and other issues, the pollution levels found acceptable in different countries tend to be quite similar.
The arctic curlew, a once-common wading species of the tundra, has reached endangered status in a mere few decades. Those who account for the sudden loss of the bird do so in terms of either pollution or climate change. Fagen notes that hydrocarbons from oil tankers have increased in a proportion commensurate with the decline in the number of arctic curlews. He believes that the birds not only ingest hydrocarbons while wading in polluted water but also feed on worms and small fish that have themselves accrued modest amounts of hydrocarbons. Miller, on the other hand, believes that climate change alone can account for the depletion of the arctic curlew. She argues that since many of the areas in which it once fed no longer provide adequate sustenance, the bird has been forced to change migratory paths and must land in a foreign ecosystem where it is unable to find adequate nutrition.
Both theories, however, are somewhat correct, though not in a way that either Fagen or Miller would likely have anticipated. The theory positing that the hydrocarbons the birds ingest affect their ability to navigate contains elements relating to both climate change and pollution. For example, when a curlew ingests tainted fish, its ability to navigate is diminished for several hours afterward, and it will often veer from its traditional path. Whereas such deviations would once not have affected the bird's ability to forage, climate change has significantly diminished the areas capable of providing sufficient sustenance for the arctic curlew. As a result, it often succumbs to starvation.
Language acquisition has long been thought of as a process of imitation and reinforcement. Children learn to speak, in the popular view, by copying the utterances heard around them, and by having their response strengthened by the repetitions, corrections, and other reactions that adults provide. In recent years, it has become clear that this principle will not explain all the facts of language development. Children do imitate a great deal, especially in learning sounds and vocabulary; but little of their grammatical ability can be explained in this way. Two kinds of evidence are commonly used in support of this criticism–one based on the kind of language children produce, the other on what they do not produce.
The first piece of evidence derives from the way children handle irregular grammatical patterns. When they encounter such irregular past-tense forms as went and took or such plural forms as mice and sheep, there is a stage when they replace these by forms based on the regular patterns of the language. They say such things as wented, taked, mices, mouses, and sheeps. Evidently, children assume that grammatical usage is regular, and try to work out for themselves what the forms 'ought' to be–a reasoning process known as analogy. They could not have learned these forms by a process of imitation. The other kind of evidence is based on the way children seem unable to imitate adult grammatical constructions exactly, even when invited to do so.
In the 1860s, the German philologist Lazarus Geiger proposed that the subdivision of color always follows the same hierarchy. The simplest color lexicons (such as that of the Dugerm Dani language of New Guinea) distinguish only black/dark and white/light. The next color to be given a separate word by cultures is always centered on the red part of the visible spectrum. Then, according to Geiger, societies will adopt a word corresponding to yellow, then green, then blue. Geiger's color hierarchy was forgotten until it was restated in almost the same form in 1969 by Brent Berlin, an anthropologist, and Paul Kay, a linguist, when it was hailed as a major discovery in modern linguistics. It showed a universal regularity underlying the apparently arbitrary way language is used to describe the world.
Berlin and Kay's hypothesis has since fallen in and out of favor, and certainly there are exceptions to the scheme they proposed. But the fundamental color hierarchy, at least in the early stages (black/white, red, yellow/green, blue) remains generally accepted. The problem is that no one could explain why this ordering of color exists. Why, for example, does the blue of sky and sea, or the green of foliage, not occur as a word before the far less common red?
There are several schools of thought about how colors get named. “Nativists,” who include Berlin and Kay, argue that the way in which we attach words to concepts is innately determined by how we perceive the world. In this view, our perceptual apparatus has evolved to ensure that we make “sensible”—that is, useful—choices of what to label with distinct words: we are hardwired for practical forms of language. “Empiricists,” in contrast, argue that we don't need this innate programming, just the capacity to learn the conventional (but arbitrary) labels for things we can perceive.
In both cases, the categories of things to name are deemed “obvious”: language just labels them. But the conclusions of Loreto and colleagues fit with a third possibility: the “culturist” view, which says that shared communication is needed to help organize category formation, so that categories and language co-evolve in an interaction between biological predisposition and culture. In other words, the starting point for color terms is not some inevitably distinct block of the spectrum, but neither do we just divide up the spectrum in some arbitrary fashion, because the human eye has different sensitivity to different parts of the spectrum. Given this, we have to arrive at some consensus, not just on which label to use, but on what is being labeled.