
GMAT Kaomanfen Question Bank


9,362 questions in the bank

14,465 search results in total

Source | Question content
On January 1st Thomas deposited $2,000 into an interest-bearing checking account. If he made no withdrawals, what was the total amount Thomas had in the checking account on December 31st of the same year?
(1) Thomas deposited an additional $4,000 throughout the year.
(2) The checking account earned 7 percent simple annual interest.
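A quick numeric check (an illustration, not part of the question) shows why even both statements together leave the balance undetermined: with simple interest, what the extra $4,000 earns depends on when during the year it was deposited, which the statements never fix. The function name and the month convention below are assumptions made for the sketch.

```python
def year_end_balance(initial, extra_deposits, rate):
    """Simple annual interest: each dollar earns rate * (fraction of year held).

    extra_deposits: list of (amount, month_deposited) pairs, month 0 = Jan 1.
    """
    total = initial * (1 + rate)  # the initial deposit earns a full year of interest
    for amount, month in extra_deposits:
        total += amount * (1 + rate * (12 - month) / 12)
    return total

# Same facts from statements (1) and (2), different deposit timings:
early = year_end_balance(2000, [(4000, 0)], 0.07)   # $4,000 deposited Jan 1
late = year_end_balance(2000, [(4000, 11)], 0.07)   # $4,000 deposited Dec 1
print(early, late)  # different totals, so the statements together are insufficient
```

Since two scenarios consistent with both statements give different year-end totals, the combined information cannot determine a unique answer.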
What is the value of x?
(1) $$\frac{1}{x^2}=\frac{1}{16}$$
(2) $$\frac{1}{2}x=\frac{1}{8}$$
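The two statements can be checked mechanically; this sketch (an illustration, not an official solution) shows that statement (1) is consistent with two values of x while statement (2) pins x down to a single value.

```python
from fractions import Fraction

# Statement (1): 1/x^2 = 1/16  ->  x^2 = 16, satisfied by both -4 and 4
sols_1 = [x for x in (-4, 4) if Fraction(1, x ** 2) == Fraction(1, 16)]
print(sols_1)  # [-4, 4] -- two candidates, so (1) alone is insufficient

# Statement (2): (1/2)x = 1/8  ->  x = (1/8) / (1/2), a single value
x2 = Fraction(1, 8) / Fraction(1, 2)
print(x2)  # 1/4 -- (2) alone determines x
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point doubt about whether the candidate values truly satisfy each equation.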
The number of violations of environmental regulations recorded each year is directly proportional to the number of Environmental Protection Agency (EPA) officials assigned to a particular industry: the more EPA officials there are to police it, the more a particular industry will be charged with violations. So, the allegedly environmentally insensitive chemical companies harm the environment no more than do other industries, but their violations provoke such strong public outcry and EPA reaction that they are charged with more violations than any other industry. Which one of the following, if true, would most seriously weaken the argument made by the author in the above passage?
Comparable worth, as a standard applied to eliminate inequities in pay, insists that the values of certain tasks performed in dissimilar jobs can be compared. In the last decade, this approach has become a critical social policy issue, as large numbers of private-sector firms and industries as well as federal, state, and local governmental entities have adopted comparable worth policies or begun to consider doing so. This widespread institutional awareness of comparable worth indicates increased public awareness that pay inequities--that is, situations in which pay is not "fair" because it does not reflect the true value of a job--exist in the labor market. However, the question still remains: have the gains already made in pay equity under comparable worth principles been of a precedent-setting nature or are they mostly transitory, a function of concessions made by employers to mislead female employees into believing that they have made long-term pay equity gains? Comparable worth pay adjustments are indeed precedent-setting because of the principles driving them. Other mandates that can be applied to reduce or eliminate unjustified pay gaps between male and female workers have not remedied perceived pay inequities satisfactorily for the litigants in cases in which men and women hold different jobs. But whenever comparable worth principles are applied to pay schedules, perceived unjustified pay differences are eliminated. In this sense, then, comparable worth is more comprehensive than other mandates, such as the Equal Pay Act of 1963 and Title VII of the Civil Rights Act of 1964. Neither compares tasks in dissimilar jobs (that is, jobs across occupational categories) in an effort to determine whether or not what is necessary to perform these tasks--know-how, problem-solving, and accountability--can be quantified in terms of its dollar value to the employer.
Comparable worth, on the other hand, takes as its premise that certain tasks in dissimilar jobs may require a similar amount of training, effort, and skill; may carry similar responsibility; may be carried on in an environment having a similar impact upon the worker; and may have a similar dollar value to the employer.
The general density dependence model can be applied to explain the founding of specialist firms (those attempting to serve a narrow target market). According to this model, specialist foundings hinge on the interplay between legitimation and competitive forces, both of which are functions of the density (total number) of firms in a particular specialist population. Legitimation occurs as a new type of firm moves from being viewed as unfamiliar to being viewed as a natural way to organize. At low density levels, each founding increases legitimation, reducing barriers to entry and easing subsequent foundings. Competition occurs because the resources that firms seek--customers, suppliers, and employees--are limited, but as long as density is low relative to plentiful resources, the addition of another firm has a negligible impact on the intensity of competition. At high density levels, however, competitive effects outweigh legitimation effects, discouraging foundings. The more numerous the competitors, the fiercer the competition will be and the smaller will be the incentive for new firms to enter the field. While several studies have found a significant correspondence between the density dependence model and actual patterns of foundings, other studies have found patterns not consistent with the model. A possible explanation for this inconsistency is that legitimation and competitive forces transcend national boundaries, while studies typically restrict their analysis to the national level. Thus a national-level analysis can understate the true legitimation and competitive forces as well as the number of foundings in an industry that is internationally integrated. Many industries are or are becoming international, and since media and information easily cross national borders, so should legitimation and its effects on overseas foundings.
For example, if a type of firm becomes established in the United States, that information transcends borders, reduces uncertainties, and helps foundings of that type of firm in other countries. Even within national contexts, studies have found more support for the density dependence model when they employ broader geographic units of analysis--for example, finding that the model's operation is seen more clearly at the state and national levels than at city levels.
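The passage's verbal model, legitimation raising founding rates at low density and competition suppressing them at high density, is often formalized as a log-quadratic founding-rate curve. The functional form and parameter values below are assumptions for illustration, not something the passage specifies.

```python
import math

def founding_rate(n, alpha=0.1, beta=0.0005):
    """Illustrative density-dependence curve lambda(n) = exp(alpha*n - beta*n^2).

    alpha stands in for legitimation (dominant at low density); beta stands in
    for competition (dominant at high density). Parameters are arbitrary.
    """
    return math.exp(alpha * n - beta * n ** 2)

# The curve is an inverted U: foundings are encouraged up to a peak density,
# then discouraged as competitive effects outweigh legitimation effects.
peak_density = max(range(301), key=founding_rate)
print(peak_density)  # alpha / (2 * beta) = 100 for these parameters
```

The peak at n = alpha / (2 * beta) is the density where, in this toy parameterization, competition begins to outweigh legitimation, matching the passage's qualitative story.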
In a new book about the antiparty feeling of the early political leaders of the United States, Ralph Ketcham argues that the first six Presidents differed decisively from later Presidents because the first six held values inherited from the classical humanist tradition of eighteenth-century England. In this view, government was designed not to satisfy the private desires of the people but to make them better citizens; this tradition stressed the disinterested devotion of political leaders to the public good. Justice, wisdom, and courage were more important qualities in a leader than the ability to organize voters and win elections. Indeed, leaders were supposed to be called to office rather than to run for office. And if they took up the burdens of public office with a sense of duty, leaders also believed that such offices were naturally their due because of their social preeminence or their contributions to the country. Given this classical conception of leadership, it is not surprising that the first six Presidents condemned political parties. Parties were partial by definition, self-interested, and therefore serving something other than the transcendent public good. Even during the first presidency (Washington's), however, the classical conception of virtuous leadership was being undermined by commercial forces that had been gathering since at least the beginning of the eighteenth century. Commerce--its profit-making, its self-interestedness, its individualism--became the enemy of these classical ideals. Although Ketcham does not picture the struggle in quite this way, he does rightly see Jackson's tenure (the seventh presidency) as the culmination of the acceptance of party, commerce, and individualism. For the Jacksonians, nonpartisanship lost its relevance, and under the direction of Van Buren, party gained a new legitimacy.
The classical ideals of the first six Presidents became identified with a privileged aristocracy, an aristocracy that had to be overcome in order to allow competition between opposing political interests. Ketcham is so strongly committed to justifying the classical ideals, however, that he underestimates the advantages of their decline. For example, the classical conception of leadership was incompatible with our modern notion of the freedoms of speech and press, freedoms intimately associated with the legitimacy of opposing political parties.
Conventional wisdom has it that large deficits in the United States budget cause interest rates to rise. Two main arguments are given for this claim. According to the first, as the deficit increases, the government will borrow more to make up for the ensuing shortage of funds. Consequently, it is argued, if both the total supply of credit (money available for borrowing) and the amount of credit sought by nongovernment borrowers remain relatively stable, as is often supposed, then the price of credit (the interest rate) will increase. That this is so is suggested by the basic economic principle that if supplies of a commodity (here, credit) remain fixed and demand for that commodity increases, its price will also increase. The second argument supposes that the government will tend to finance its deficits by increasing the money supply with insufficient regard for whether there is enough room for economic growth to enable such an increase to occur without causing inflation. It is then argued that financiers will expect the deficit to cause inflation and will raise interest rates, anticipating that because of inflation the money they lend will be worth less when paid back. Unfortunately for the first argument, it is unreasonable to assume that nongovernment borrowing and the supply of credit will remain relatively stable. Nongovernment borrowing sometimes decreases. When it does, increased government borrowing will not necessarily push up the total demand for credit. Alternatively, when credit availability increases, for example through greater foreign lending to the United States, then interest rates need not rise, even if both private and government borrowing increase. The second argument is also problematic. Financing the deficit by increasing the money supply should cause inflation only when there is not enough room for economic growth. Currently, there is no reason to expect deficits to cause inflation.
However, since many financiers believe that deficits ordinarily create inflation, admittedly they will be inclined to raise interest rates to offset mistakenly anticipated inflation. This effect, however, is due to ignorance, not to the deficit itself, and could be lessened by educating financiers on this issue.
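The supply-and-demand principle behind the first argument, and the passage's rebuttal of it, can be illustrated with a toy linear credit market. All functions and numbers here are assumed for illustration; none come from the passage.

```python
def equilibrium_rate(credit_supply, demand_intercept, demand_slope):
    """Linear demand for credit: quantity demanded = intercept - slope * rate.

    With the supply of credit fixed, the market-clearing interest rate solves
    supply = demand. Illustrative units only.
    """
    return (demand_intercept - credit_supply) / demand_slope

# Conventional argument: government borrowing shifts demand out, supply fixed,
# so the equilibrium interest rate rises.
base = equilibrium_rate(credit_supply=100, demand_intercept=150, demand_slope=10)
with_deficit = equilibrium_rate(credit_supply=100, demand_intercept=170, demand_slope=10)

# Passage's rebuttal: if nongovernment borrowing falls by the same amount the
# government borrows, total demand (the intercept) is unchanged, and so is the rate.
offset = equilibrium_rate(credit_supply=100, demand_intercept=150, demand_slope=10)
print(base, with_deficit, offset)
```

The point of the third call is the passage's objection: the interest rate responds to total demand relative to supply, not to government borrowing in isolation.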
Dendrochronology, the study of tree-ring records to glean information about the past, is possible because each year a tree adds a new layer of wood between the existing wood and the bark. In temperate and subpolar climates, cells added at the growing season's start are large and thin-walled, but later the new cells that develop are smaller and thick-walled; the growing season is followed by a period of dormancy. When a tree trunk is viewed in cross section, a boundary line is normally visible between the small-celled wood added at the end of the growing season in the previous year and the large-celled spring wood of the following year's growing season. The annual growth pattern appears as a series of larger and larger rings. In wet years rings are broad; during drought years they are narrow, since the trees grow less. Often, ring patterns of dead trees of different, but overlapping, ages can be correlated to provide an extended index of past climate conditions. However, trees that grew in areas with a steady supply of groundwater show little variation in ring width from year to year; these "complacent" rings tell nothing about changes in climate. And trees in extremely dry regions may go a year or two without adding any rings, thereby introducing uncertainties into the count. Certain species sometimes add more than one ring in a single year, when growth halts temporarily and then starts again.
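The cross-dating idea in the passage, correlating overlapping ring patterns from trees of different ages, can be sketched as a simple alignment routine. This is a toy version under assumed data; real dendrochronology standardizes the series and uses correlation statistics rather than raw squared error.

```python
def best_overlap(series_a, series_b, min_overlap=5):
    """Slide series_b along series_a and return the offset whose overlapping
    ring widths match most closely (smallest mean squared difference)."""
    best_offset, best_err = None, float("inf")
    for offset in range(-(len(series_b) - min_overlap),
                        len(series_a) - min_overlap + 1):
        pairs = [(series_a[i + offset], series_b[i])
                 for i in range(len(series_b))
                 if 0 <= i + offset < len(series_a)]
        if len(pairs) < min_overlap:
            continue  # require enough shared years to trust the match
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

# A dead tree's early rings (dead) overlap the end of a living tree's record:
living = [1.2, 0.4, 0.9, 1.5, 0.3, 0.8, 1.1, 0.5]
dead = [1.5, 0.3, 0.8, 1.1, 0.5, 1.4]  # first five rings match living[3:8]
print(best_overlap(living, dead))  # 3 -- the dead tree's first ring is year 3
```

Chaining such overlaps from living trees back through successively older dead wood is what yields the extended climate index the passage describes.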
The United States hospital industry is an unusual market in that nonprofit and for-profit producers exist simultaneously. Theoretical literature offers conflicting views on whether nonprofit hospitals are less financially efficient. Theory suggests that nonprofit hospitals are so much more interested in offering high-quality service than in making money that they frequently input more resources to provide the same output of service as for-profit hospitals. This priority might also often lead them to be less vigilant in streamlining their services--eliminating duplication between departments, for instance. Conversely, while profit motive is thought to encourage for-profit hospitals to attain efficient production, most theorists admit that obstacles to that efficiency remain. For-profit hospital managers, for example, generally work independently of hospital owners and thus may not always make maximum financial efficiency their highest priority. The literature also suggests that widespread adoption of third-party payment systems may eventually eliminate any such potential differences between the two kinds of hospitals. The same literature offers similarly conflicting views of the efficiency of nonprofit hospitals from a social welfare perspective. Newhouse (1970) contends that nonprofit hospital managers unnecessarily expand the quality and quantity of hospital care beyond the actual needs of the community, while Weisbrod (1975) argues that nonprofit firms--hospitals included--contribute efficiently to community welfare by providing public services that might be inadequately provided by government alone.
The identification of femininity with morality and a belief in the innate moral superiority of women were fundamental to the cult of female domesticity in the nineteenth-century United States. Ironically, this ideology of female benevolence empowered women in the realm of social activism, enabling them to escape the confines of their traditional domestic spheres and to enter prisons, hospitals, battlefields, and slums. By following this path, some women came to wield considerable authority in the distribution of resources and services in their communities. The sentimentalized concept of female benevolence bore little resemblance to women's actual work, which was decidedly unsentimental and businesslike, in that it involved chartering societies, raising money, and paying salaries. Moreover, in the face of legal limitations on their right to control money and property, women had to find ingenious legal ways to run and finance organized philanthropy. In contrast to the day-to-day reality of this work, the idealized image of female benevolence lent a sentimental and gracious aura of altruism to the very real authority and privilege that some women commanded--which explains why some women activists clung tenaciously to this ideology. But clinging to this ideology also prevented these women from even attempting to gain true political power because it implied a moral purity that precluded participation in the messy world of partisan politics.
After the Second World War, unionism in the Japanese auto industry was company-based, with separate unions in each auto company. Most company unions played no independent role in bargaining shop-floor issues or pressing autoworkers' grievances. In a 1981 survey, for example, fewer than 1 percent of workers said they sought union assistance for work-related problems, while 43 percent said they turned to management instead. There was little to distinguish the two in any case: most union officers were foremen or middle-level managers, and the union's role was primarily one of passive support for company goals. Conflict occasionally disrupted this cooperative relationship--one company union's opposition to the productivity campaigns of the early 1980s has been cited as such a case. In 1986, however, a caucus led by the Foreman's Association forced the union's leadership out of office and returned the union's policy to one of passive cooperation. In the United States, the potential for such company unionism grew after 1979, but it had difficulty taking hold in the auto industry, where a single union represented workers from all companies, particularly since federal law prohibited foremen from joining or leading industrial unions. The Japanese model was often invoked as one in which authority decentralized to the shop floor empowered production workers to make key decisions. What these claims failed to recognize was that the actual delegation of authority was to the foreman, not the workers. The foreman exercised discretion over job assignments, training, transfers, and promotions; worker initiative was limited to suggestions that fine-tuned a management-controlled production process. Rather than being proactive, Japanese workers were forced to be reactive, the range of their responsibilities being far wider than their span of control.
For example, the founder of one production system, Taichi Ohno, routinely gave department managers only 90 percent of the resources needed for production. As soon as workers could meet production goals without working overtime, 10 percent of remaining resources would be removed. Because the "OH! NO!" system continually pushed the production process to the verge of breakdown in an effort to find the minimum resource requirement, critics described it as "management by stress."
A key decision required of advertising managers is whether a "hard-sell" or "soft-sell" strategy is appropriate for a specific target market. The hard-sell approach involves the use of direct, forceful claims regarding the benefits of the advertised brand over competitors' offerings. In contrast, the soft-sell approach involves the use of advertising claims that imply superiority more subtly. One positive aspect of the hard-sell approach is its use of very simple and straightforward product claims presented as explicit conclusions, with little room for confusion regarding the advertiser's message. However, some consumers may resent being told what to believe and some may distrust the message. Resentment and distrust often lead to counterargumentation and to boomerang effects, where consumers come to believe conclusions diametrically opposed to the conclusions endorsed in advertising claims. By contrast, the risk of boomerang effects is greatly reduced with soft-sell approaches. One way to implement the soft-sell approach is to provide information that implies the main conclusions the advertiser wants the consumer to draw, but leave the conclusions themselves unstated. Because consumers are invited to make up their own minds, implicit conclusions reduce the risk of resentment, distrust, and counterargumentation. Recent research on consumer memory and judgment suggests another advantage of implicit conclusions. Beliefs or conclusions that are self-generated are more accessible from memory than beliefs or conclusions provided explicitly by other individuals, and thus have a greater impact on judgment and decision making.
Moreover, self-generated beliefs are often perceived as more accurate and valid than the beliefs of others, because other individuals may be perceived as less knowledgeable, or may be perceived as manipulative or deliberately misleading. Despite these advantages, implicit conclusions may not always be more effective than explicit conclusions. One risk is that some consumers may fail to draw their own conclusions and thus miss the point of the message. Inferential activity is likely only when consumers are motivated and able to engage in effortful cognitive processes. Another risk is that some consumers may draw conclusions other than the one intended. Even if inferential activity is likely, there is no guarantee that consumers will follow the path provided by the advertiser. Finally, a third risk is that consumers may infer the intended conclusion but question the validity of their inference.
The dry mountain ranges of the Western United States contain rocks dating back 440 to 510 million years, to the Ordovician period, and teeming with evidence of tropical marine life. This rock record provides clues about one of the most significant radiations (periods when existing life-forms gave rise to variations that would eventually evolve into entirely new species) in the history of marine invertebrates. During this radiation the number of marine biological families increased greatly, and these families included species that would dominate the marine ecosystems of the area for the next 215 million years. Although the radiation spanned tens of millions of years, major changes in many species occurred during a geologically short time span within the radiation and, furthermore, appear to have occurred worldwide, suggesting that external events were major factors in the radiation. And, in fact, there is evidence of major ecological and geological changes during this period: the sea level dropped drastically and mountain ranges were formed. In this instance, rather than leading to large-scale extinctions, these kinds of environmental changes may have resulted in an enriched pattern of habitats and nutrients, which in turn gave rise to the Ordovician radiation. However, the actual relationship between these environmental factors and the diversification of life-forms is not yet fully understood.
Techniques
Island Museum analyzes historical artifacts using one or more techniques described below—all but one of which is performed by an outside laboratory—to obtain specific information about an object's creation. For each type of material listed, the museum uses only the technique described:
Animal teeth or bones: The museum performs isotope ratio mass spectrometry (IRMS) in-house to determine the ratios of chemical elements present, yielding clues as to the animal's diet and the minerals in its water supply.
Metallic ores or alloys: Inductively coupled plasma mass spectrometry (ICP-MS) is used to determine the ratios of traces of metallic isotopes present, which differ according to where the sample was obtained.
Plant matter: While they are living, plants absorb carbon-14, which decays at a predictable rate after death; thus radiocarbon dating is used to estimate a plant's date of death.
Fired-clay objects: Thermoluminescence (TL) dating is used to provide an estimate of the time since clay was fired to create the object.
Artifacts
Island Museum has acquired a collection of metal, fired clay, stone, bone, and wooden artifacts found on the Kaxna Islands, and presumed to be from the Kaxna Kingdom of 1250-850 BC. Researchers have mapped all the mines, quarries, and sources of clay on Kaxna and know that wooden artifacts of that time were generally created within 2 years after tree harvest. There is, however, considerable uncertainty as to whether these artifacts were actually created on Kaxna.
In analyzing these artifacts, the museum assumes that radiocarbon dating is accurate to approximately ±200 years and TL dating is accurate to approximately ±100 years.
Budget
For outside laboratory tests, the museum's first-year budget for the Kaxna collection allows unlimited IRMS testing, and a total of $7,000—equal to the cost of 4 TL tests plus 15 radiocarbon tests, or the cost of 40 ICP-MS tests—for all other tests.
For each technique applied by an outside lab, the museum is charged a fixed price per artifact.
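The budget clause pins down one price exactly and leaves the others underdetermined, which is worth making explicit. The check below is an illustration; the candidate TL prices are arbitrary assumptions, not figures from the passage.

```python
from fractions import Fraction

budget = 7000  # dollars, equal to 40 ICP-MS tests at a fixed per-artifact price
icp_ms_price = Fraction(budget, 40)
print(icp_ms_price)  # 175 -- the ICP-MS price is determined exactly

# TL and radiocarbon prices satisfy 4*tl + 15*rc = 7000: one equation in two
# unknowns, so the individual prices are not determined by the passage.
candidates = [(tl, Fraction(budget - 4 * tl, 15)) for tl in (250, 550, 1000)]
print(candidates)  # several (TL, radiocarbon) price pairs all fit the budget
```

So any reasoning about what the museum can afford must work from the $7,000 total (or the ICP-MS unit price), not from assumed TL or radiocarbon unit prices.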
Village Sites
An archaeological team has been excavating three ancient village sites—Barras, Agna, and Cussaia—looking in particular at kitchen waste dumps as a way to understand the villages' dietary patterns and trading relationships. What follows are brief summaries of their findings.
Barras: The best data come from stratified finds in this oceanside village, which was inhabited from AD 600 to 1300 and was the only one of the three villages to produce seafood, its main dietary item. Though Barras residents hunted on land and raised crops, this provided relatively small amounts of food. As Barras's overall prosperity rose, there was more food available per person, and its population increased from an average of 100 residents in the AD 600s to 400 residents in the AD 1000s to 600 residents in the AD 1200s.
Agna: Agna was established in an inland forest around AD 800 and its residents mainly hunted but also ate considerable amounts of fruit, nuts, and other forest-vegetable products. They also traded meat to Barras for other goods. With no open fields, Agna grew no grain.
Cussaia: Predating Barras, Cussaia depended heavily on raising grain crops and eventually obtained seafood and meat via trade. It traded directly only with Barras, because a mountain range separated it from Agna, though some products may have been traded between Agna and Cussaia via Barras.
Additionally, there is no evidence that any other village traded with Barras, Agna, or Cussaia prior to AD 1300.
Food Variety
[chart] Barras: Percentages, by Estimated Weight, of Dietary Items Consumed per Person per Month
Food Consumption
[chart] Barras, Agna: Estimated Average Monthly Meat and Seafood Consumption (lb per 4-Person Family)
Despite an abundance of major nutrients in the surface waters of parts of the ocean, extremely low concentrations of dissolved iron are believed to play a crucial role in limiting the biological productivity of these remote regions. Phytoplankton, the basis of freshwater food chains and all aerobic life as well as the source of most of Earth's atmospheric oxygen, require iron for various biochemical processes. Thus, a lack of iron in surface waters has detrimental effects. In temperate and tropical oceans, iron reaches surface waters via the dissolution of eolian-transported continental dust. Previously, little was known about iron distribution in the surface waters of non-temperate oceans such as the Arctic Ocean. Recent advances, however, have resulted in an analytical methodology capable of determining iron concentrations in ambient surface waters. Studies indicate that concentrations across the Arctic Basin are relatively high and quite variable, ranging from 3.2 nM in the western Arctic to 0.75 nM in the Nansen Basin. The highest values of iron concentration occur in regions with ice floes containing significant quantities of surface sediment. The hypothesis that ice-rafted sediment is the source of high iron values is bolstered by the presence of large amounts of aluminum in the same regions. The entrainment of sediments from the edge of the basin into floes during the winter freezing process along with the subsequent advection and partial melting of the ice at the center of the basin provides a means of transporting reactive trace metals, such as iron, to the center of the basin. The partial melting of floes during the summer appears sufficient to transport high concentrations of iron to both surface and stratified waters. It seems, however, that any change resulting in the diminution of ice-edge freezing in winter might lead to significant changes in the nature and magnitude of primary productivity in the central Arctic.
Although epidemics are often triggered by bacteria and viruses that have undergone genetic mutations, as was the case with the Human Immunodeficiency Virus (HIV), which mutated into a harmful virus when it was transmitted from monkeys to humans, outbreaks of other diseases are caused by bacteria and viruses whose genetic make-ups have not undergone significant changes. In many cases, such diseases spread as a result of social factors. Tuberculosis (TB) is a preventable and treatable disease that continues to infect thousands of Americans each year. The widespread global utilization of the BCG vaccine and antibiotics, in addition to generally improved public health, led to a dramatic reduction in both the number of deaths attributed to tuberculosis globally and in the economic burden of the disease between 1940 and 1980. But the short-term success of these tools led to complacency and a decreased interest on the part of governments and pharmaceutical companies in TB research and development. What resulted in the late 1980s in the United States, spurred by the spread of HIV and by the increase in homelessness, incarceration, and intravenous drug use, was a 20 percent increase in TB rates. These TB outbreaks were difficult to control and extremely costly, given that the health infrastructure for dealing with the infection had been allowed to deteriorate due to a lack of funding. In New York City alone, more than $1 billion was needed to regain control of TB. Today, the United States faces three significant challenges to the elimination of TB. First, our progress in reducing the TB case rate in the United States has stalled. Between 1993 and 2000, the nation's TB rate fell by 7.3 percent, but from 2000 to 2006, the rate of decline slowed to 3.8 percent. This is occurring at a time when domestic TB control categorical funding has been stagnant for a decade.
As the history of TB in the United States has demonstrated, complacency and neglect of TB control programs can lead to costly resurgences of the disease.
Between 1999 and 2006, there were two episodes during which inflation in the Consumer Price Index (CPI) diverged markedly from inflation in the index for Owner's Equivalent Rent (OER); early in 2007, these series began to diverge again. Such divergence often prompts many to question CPI methods. A key difference between these two series is that OER indexes are based upon rents that have received a utilities adjustment—an adjustment that is necessary because the OER index is intended to track pure rent-of-shelter, not shelter-plus-utilities. Critics have claimed that the CPI-OER inflation divergences stem from inappropriate utilities adjustment. This claim is false. There is only one divergence episode—of only six months' duration—which is primarily attributable to the utilities adjustment procedure. Indeed, the utilities adjustment sometimes reduced potential divergence between the two series. Instead, the main factor is rental market segmentation; that is, different rent inflation rates were experienced by different parts of the rental market. Before 2003, the CPI-OER inflation divergence mainly resulted from divergent rental inflation rates within metropolitan areas: areas with a higher proportion of renters experienced higher rental inflation. Compared to other units, rent control units experienced higher inflation in 2004 (and, to a lesser extent, before mid-2001 and in 2006), which increased CPI inflation but not OER inflation. Finally, in early 2007, there was a sizable divergence between OER and CPI inflation, again driven by divergent rental inflation rates within metropolitan areas. The extent of the divergence only becomes evident once the effect of the utilities adjustment is accounted for.
For years scientists have argued about what exactly caused the extinction of the dinosaurs. A key to this question is dinosaur physiology. One topic of hot debate among paleontologists is thermoregulation, or control of body temperature. All animals exhibit some form of thermoregulation through migration, perspiration, shivering, or hibernation; yet they fall into two distinct groups. Ectothermy is a form of thermoregulation in which the animal regulates its temperature through behavior and the external environment. Reptiles and amphibians are ectotherms, which means they gain heat from an outside energy source. They require little food, as their metabolic rate is low; however, ectotherms cannot withstand extreme cold. On the other hand, mammals and birds are endothermic. They have a tachymetabolic (high-speed) metabolism, which produces body heat internally. Endothermy is a highly effective but expensive state, as the elevated metabolic rate requires a high caloric intake each day. Because dinosaurs are extinct, their physiology cannot be measured by the usual scientific methods (temperature measurement, record of food consumption, and output of carbon dioxide and solid waste). Fossils initially indicated to early scientists that dinosaurs were ectotherms, because of their external physical similarity to modern-day reptiles (jawbones, scales, etc.). However, in recent years, scientists have reversed their preliminary findings as modern indirect evidence seems to suggest that dinosaurs were endotherms. The debate is still open. Two dinosaur clans were herbivores with such developed and intricate dental equipment that scientists assume they must have been tachymetabolic, or endothermic. However, other dinosaurs lack those evolutions. Bipedal dinosaurs are thought to have been very fleet, suggesting endothermic behavior.
Also leading scientists to that conclusion were the talons on their feet; balancing must have required a high degree of agility and activity consistent with endothermic behavior. Another compelling form of evidence is the fact that most dinosaurs stood upright, which means that their hearts had to be sophisticated and powerful enough to pump blood to their elevated brains. Scientists believe that dinosaurs had to possess the double-pump heart found in other endotherms. However, the fossil record is incomplete; scientists are basing their assumptions on partial evidence. Scientists can draw no definitive conclusions. Just because dinosaurs had the ability to be endothermic doesn't mean they were, at least not exclusively. Modern technology has made possible the operation of cars, houses, appliances, etc. by solar energy; still, society has not given up its use of electricity, gas, and oil. Current speculation among scientists places dinosaurs somewhere in between both extremes, a hypothesis which could lead to a new chapter in the story of evolutionary theory.
The Mediterranean Sea sedimentary record reveals intermittent black sediment layers that seem to represent environmental changes over time. These layers, darkened by organic matter that is rare in most ocean sediments, indicate the presence of a strong physical control. Some geologists credit "reverse circulation," wherein waters flowing inward from the Atlantic Ocean are flushed back out in a reverse flow by the constant influx of fresh water from the several rivers that feed the sea. As surface waters evaporate, the heavily salted remnant, containing less dissolved oxygen, sinks. During intensely wet periods, soil nutrients fertilize the surface, phytoplankton production increases, and evaporation dissipates, creating an organically rich environment. However, sedimentary records show that bottom-dwellers coexisting with the dark layers were oxygen-stressed, so reverse circulation could not have existed. Milankovitch Cycles, associated with earth-sun positional relationships, which change the dates of the equinoxes/solstices in 20,000-year cycles, are more likely. Climatic effects include intense rainy seasons depending on the earth's tilt, orbital eccentricity, and distance from the sun. Additionally, the 20,000-year cycles match the spacing of dark layers in the Mediterranean record.