In the mid-1970s, Walter Alvarez, a geologist, was studying Earth's polarity. It had recently been learned that the orientation of the planet's magnetic field reverses, so that every so often, in effect, south becomes north and vice versa. Alvarez and some colleagues had found that a certain formation of pinkish limestone in Italy, known as the scaglia rossa, recorded these occasional reversals. The limestone also contained the fossilized remains of millions of tiny sea creatures called foraminifera. Alvarez became interested in a thin layer of clay in the limestone that seemed to have been laid down around the end of the Cretaceous period. Below the layer, certain species of foraminifera—or forams, for short—were preserved. In the clay layer, there were no forams. Above the layer, the earlier species disappeared and new forams appeared. Alvarez had been taught the uniformitarian view, which held that any apparent extinctions throughout geological time resulted from 'the incompleteness of the fossil record' rather than an actual extinction, so he was not sure what to make of the lacuna corresponding to the missing foraminifera: the change looked very abrupt.
Had Walter Alvarez not asked his father, the Nobel Prize-winning physicist Luis Alvarez, how long the clay had taken to deposit, the younger Alvarez might not have thought to use iridium, an element rarely found on Earth but more plentiful in meteorites, to answer this question. Iridium, in the form of microscopic grains of cosmic dust, is constantly raining down on the planet. The Alvarezes reasoned that if the clay layer had taken a significant amount of time to deposit, it would contain detectable levels of iridium. The results were startling: far more iridium showed up than the slow rain of cosmic dust could account for. The Alvarez hypothesis, as it became known, was that everything—not just the clay layer—could be explained by a single event: a six-mile-wide asteroid had slammed into Earth, killing off not only the forams but also the dinosaurs and all the other organisms that went extinct at the end of the Cretaceous period.
One reason we are able to recognize speech, despite all the acoustic variation in the signal, and even in very difficult listening conditions, is that the speech situation contains a great deal of redundancy—more information than is strictly necessary to decode the message. There is, firstly, our general ability to make predictions about the nature of speech, based on our previous linguistic experience—our knowledge of the speakers, subject matter, language, and so on. But in addition, the wide range of frequencies found in every signal presents us with far more information than we need in order to recognize what is being said. As a result, we are able to focus our auditory attention on just the relevant distinguishing features of the signal—features that have come to be known as acoustic cues.
What are these cues, and how can we prove their role in the perception of speech? It is not possible to obtain this information simply by carrying out an acoustic analysis of natural speech: this would tell us what acoustic information is present but not what features of the signal are actually used by listeners in order to identify speech sounds. The best an acoustic description can do is give us a rough idea as to what a cue might be. But to learn about listeners' perception, we need a different approach.
Demotic Greek (language of the people) is the modern vernacular form of the Greek language, and refers particularly to the form of the language that evolved naturally from ancient Greek, in opposition to the artificially archaic Katharevousa, which was the official standard until 1976. The two complemented each other in a typical example of diglossia, or the existence of two forms of a language (usually a “high” and a “low”) employed by the same speaker depending on the social context, until the resolution of the Greek language question in favor of Demotic.
Demotic is often thought to be the same as the modern Greek language, but these two terms are not completely synonymous. While Demotic is a term applied to the naturally evolved colloquial language of the Greeks, the modern Greek language of today is more like a fusion of Demotic and Katharevousa; it can be viewed as a variety of Demotic that has been enriched by "educated" elements. Therefore, it is not wrong to call the spoken language of today Demotic, though such terminology ignores the fact that modern Greek contains, especially in its written or official form, numerous words, grammatical forms, and phonetic features that did not exist in colloquial speech and entered the language only through its archaic variety. Additionally, even the most archaic forms of Katharevousa were never thought of as ancient Greek but were always called "modern Greek," so that the phrase "modern Greek" applies to Demotic, Standard Modern Greek, and even Katharevousa.
In the 1860s, the German philologist Lazarus Geiger proposed that the subdivision of color always follows the same hierarchy. The simplest color lexicons (such as that of the Dugerm Dani language of New Guinea) distinguish only black/dark and white/light. The next color to be given a separate word by cultures is always centered on the red part of the visible spectrum. Then, according to Geiger, societies will adopt a word corresponding to yellow, then green, then blue. Geiger's color hierarchy was forgotten until it was restated in almost the same form in 1969 by Brent Berlin, an anthropologist, and Paul Kay, a linguist, when it was hailed as a major discovery in modern linguistics: it showed a universal regularity underlying the apparently arbitrary way language is used to describe the world.
Berlin and Kay's hypothesis has since fallen in and out of favor, and certainly there are exceptions to the scheme they proposed. But the fundamental color hierarchy, at least in its early stages (black/white, red, yellow/green, blue), remains generally accepted. The problem is that no one has been able to explain why this ordering of color exists. Why, for example, does the blue of sky and sea, or the green of foliage, not occur as a word before the far less common red?
There are several schools of thought about how colors get named. “Nativists,” who include Berlin and Kay, argue that the way in which we attach words to concepts is innately determined by how we perceive the world. In this view, our perceptual apparatus has evolved to ensure that we make “sensible”—that is, useful—choices of what to label with distinct words: we are hardwired for practical forms of language. “Empiricists,” in contrast, argue that we don't need this innate programming, just the capacity to learn the conventional (but arbitrary) labels for things we can perceive.
In both cases, the categories of things to name are deemed “obvious”: language just labels them. But the conclusions of Loreto and colleagues fit with a third possibility: the “culturist” view, which says that shared communication is needed to help organize category formation, so that categories and language co-evolve in an interaction between biological predisposition and culture. In other words, the starting point for color terms is not some inevitably distinct block of the spectrum, but neither do we just divide up the spectrum in some arbitrary fashion, because the human eye has different sensitivity to different parts of the spectrum. Given this, we have to arrive at some consensus, not just on which label to use, but on what is being labeled.
Unlike Mercury and Mars, Venus has a dense, opaque atmosphere that prevents direct observation of its surface. For years, ground-based telescopes on Earth could glean no information about the surface of Venus. In 1989, the Magellan probe was launched to conduct a five-year radar mapping of the entire surface of Venus. The data that emerged provided by far the most detailed map of the Venusian surface ever seen.
The surface shows evidence of an astonishing level of past volcanic activity: more than one hundred large shield volcanoes, many more than Earth has, and a solidified river of lava longer than the Nile. Yet the entire surface is now volcanically dead, with not a single active volcano. The surface is also relatively young in planetary terms, about 300 million years old, and the whole surface, planet-wide, is the same age: the even pattern of craters, randomly distributed across the surface, demonstrates this.
To explain this puzzling surface, Turcotte suggested a radical model. For a long period, the surface of Venus is as it is now: a surface of uniform age with no active volcanism. While the surface is fixed, volcanic pressure builds up inside the planet. At a certain point, the pressure ruptures the surface, and the entire planet is re-coated in lava in a massive, planet-wide outburst of volcanism. Having spent all its thermal energy in one gigantic outpouring, the planet cools and its surface hardens, again producing the kind of surface we see today. Turcotte proposed that this cycle has repeated several times in the past and will repeat again in the future.
To most planetary geologists, Turcotte's model is a return to catastrophism. For two centuries, geologists of all kinds fought against the idea of catastrophic, planet-wide changes, such as the Biblical idea of Noah's Flood. The triumph of gradualism was essential to the success of geology as a serious science. Indeed, all features of Earth's geology, and all features of the other moons and planets in the Solar System, even those that are not volcanically active, are explained very well by current gradualist models. Planetary geologists question why all other objects would obey gradualist models while only Venus would obey a catastrophic model. These geologists insist that the features of Venus must be explicable in terms of incremental changes occurring continuously over a long period.
Anticipating these objections, Turcotte points out that no incremental process could result in a planet-wide surface of uniform age. Furthermore, a slow process of continual change cannot easily explain why a planet with such an astounding history of volcanic activity is now volcanically dead. Turcotte argues that only his catastrophic model adequately explains the extremes of the Venusian surface.
When the first people populated the Americas is hotly debated. Until recently, the Clovis people, based on evidence found in New Mexico, were thought to have been the first to arrive, some 13,000 years ago. Yet evidence gathered from other sites suggests the Americas had been settled at least 1,000 years before the Clovis. The “Clovis first” idea was nonetheless treated as gospel, backed by supporters who, at least initially, outright discounted any claims that suggested precedence by non-Clovis people. While such a stance smacked of fanaticism, proponents did have a solid claim: if the Clovis people crossed the Bering Strait only 13,000 years ago, after it had become ice-free, how would an earlier people have been able to make a similar trip over ice?
A recent school of thought, backed by Weber, provides the following answer: pre-Clovis people reached the Americas by relying on a sophisticated maritime culture, which allowed them to take advantage of refugia, or small areas in which aquatic life flourished. Thus they were able to make the long journey by hugging the coast as far south as what is today British Columbia. Additionally, they are believed to have fashioned a primitive form of crampon so that they could dock in these refugia and avail themselves of the microfauna. Still, how such a culture developed in the first place remains unanswered.
The Solutrean theory has been influential in answering this question, a fact that may seem paradoxical, even startling, to those familiar with its line of reasoning: the Clovis people were actually Solutreans, an ancient seafaring culture along the Iberian Peninsula, who had, astoundingly for the time period, crossed into the Americas via the Atlantic Ocean. Could not a similar Siberian culture, if not the pre-Clovis themselves, have displayed equal nautical sophistication?
Even if one subscribes to this line of reasoning, the “Clovis first” school still has an objection: proponents of a pre-Clovis people rely solely on the Monte Verde site in Chile, a site so far south that its location invites yet another question: What of the 6,000 miles of coastline between the ice corridor and Monte Verde? Besides remains found in a network of caves in Oregon, there has been scant evidence of pre-Clovis peoples. Nonetheless, Meade and Pizinsky claim that a propitious geologic accident could account for this discrepancy: Monte Verde was located near a peat bog that essentially fossilized the village. Archaeologists uncovered two wooden stakes, which, at one time, were used in twelve huts. Furthermore, plant species associated with areas 150 miles away were found, suggesting a trade network. These findings indicate that the Clovis may not have been the first to people the Americas, yet more excavation, both at Monte Verde and along the coast, must be conducted to determine the extent of pre-Clovis settlements in the Americas.
Most educated people of the eighteenth century, such as the Founding Fathers, subscribed to Natural Rights Theory, the idea that every human being has a considerable number of innate rights, simply by virtue of being a human person. When the US Constitution was sent to the states for ratification, many at that time felt that the federal government outlined by the Constitution would be too strong, and that the rights of individual citizens against the government had to be clarified. This led to the Bill of Rights, the first ten amendments, which were ratified shortly after the Constitution itself. The first eight of these amendments list specific rights of citizens. Some leaders feared that listing some rights could be interpreted to mean that citizens didn't have other, unlisted rights. To address this concern, James Madison and others produced the Ninth Amendment, which states, in essence, that the listing of certain rights in the Constitution shall not be construed to imply that other rights of the people are denied.
Constitutional traditionalists interpret the Ninth Amendment as a rule for reading the rest of the Constitution. They would argue that "Ninth Amendment rights" are a misconceived notion: the amendment does not, by itself, create federally enforceable rights. In particular, this strict reading would oppose the creation of any new rights based on the amendment. Rather, according to this view, the amendment merely protects those rights that citizens already have, whether they are explicitly listed in the Constitution or simply implicit in people's lives and in American tradition.
More liberal interpreters of the US Constitution have a much more expansive view of the Ninth Amendment. In their view, the Ninth Amendment guarantees to American citizens a vast universe of potential rights, some of which we have enjoyed for two centuries and others of which the Founding Fathers could not possibly have conceived. These scholars point out that some rights, such as the voting rights of women or minorities, were not necessarily viewed as rights by the majority of citizens in late eighteenth-century America but are taken as fundamental and unquestionable in modern America. While the rights cited here are protected specifically by other amendments and laws, the argument asserts that other unlisted rights could also evolve from unthinkable to perfectly acceptable, and the Ninth Amendment would protect these as-yet-undefined rights.