
GMAT考满分 · Question Bank

Questions indexed: 9,362

Search results: 2,655

Source · Question content
The graphic displays results of an Internet survey on the effectiveness and popularity of various treatments for depression. The scale on the horizontal axis represents popularity, P, defined as the fraction of all respondents who tried the treatment; the scale on the vertical axis represents effectiveness, E, defined as the fraction of all respondents who considered a given treatment to be effective.

On the basis of the information provided, select from each of the drop-down menus the option that creates the most accurate statement.

The most effective treatment used by the greatest number of people is .

Among the labeled treatments tried by more than half of the respondents, was the least effective.
Ready4 · It has been estimated that over 20% of the annual gross domestic product in the United States is the result of innovation backed at some point by venture capital investors. But what is innovation? One traditional view of innovation is that it is a systematic business process, occurring within an organization, that is required to secure ongoing financial growth. But much of the most acclaimed and influential innovation has started with an individual's idea, with an organization to execute on that idea following only somewhat later, so the organizational definition is of limited relevance.

A more practical definition of innovation is that it is the creation of anything new that is intended to be commercialized. Under such a definition, the efforts of a lone individual developing a radical idea and those of a department within a large company exploring a new adjacent market are both examples of innovation. This somewhat loose definition, however, fails to address explicitly what makes an innovation truly new, successful, or authentic, although it may imply that all innovation is, in a sense, equally valid. Meanwhile, the oft-repeated challenge to uses of the term innovation may put too little emphasis on the activity and too much on its results. Quite possibly, 80% of the value of innovation has been contributed by 20% of the activity, but whether that 20% of activity could have manifested itself without a culture and economy to support the whole is less clear. In this regard, policy- and strategy-oriented attempts to refine this loose definition of innovation further are without merit.
Ready4 · The term "glass ceiling," denoting a discriminatory barrier that keeps women from reaching senior management positions, came into use in the early 1990s, around the time that women first surpassed men in annual university degrees obtained in the United States. Studies of employment in various labor markets, such as the 2003 study of employment data in Sweden conducted by Albrecht, Björklund, and Vroman, have found a consistent gap between men's and women's wages after controlling for gender differences in age, education level, education field, sector, industry, and occupation. However, empirical studies as early as that of Powell and Butterfield in 1994 have suggested that gender, as a job-irrelevant variable in consideration of promotions to top management positions, may actually work to women's advantage. Whereas the gender gap in pay is strongly supported by data, the glass-ceiling notion itself as a discriminatory force has been harder to establish in empirical terms.

Studies that some have considered a partial repudiation of the glass-ceiling theory have indicated that men and women differ in their preferences for competition and that such differences affect economic outcomes. If women are less likely to compete, they are less likely to enter competitive situations and hence less likely to win. For example, in a laboratory experiment featuring a non-competitive option and a competitive incentive scheme, men selected the latter twice as often as did women of equal ability. One explanation is that men are inherently more competitive; another is that the social influences limiting women's presence in executive leadership generally make their impact long before women are near the ceiling.
Ready4 · In the figure above, find the value of c - b.
C13 · M31, M32, and M33 are members of the Local Group, an assemblage of more than 54 galaxies in the neighborhood of the Milky Way, the galaxy that contains our solar system. Like the Milky Way, M31 and M33 are spiral galaxies, whereas M32 is a dwarf elliptical galaxy. Of the three, M31, also known as the Andromeda Galaxy, is the largest, with a mass that has been estimated in recent studies to be equal to or greater than that of the Milky Way. Composed primarily of older, faint stars, M32 is a substantially smaller galaxy and a satellite of Andromeda. M33, known as the Triangulum Galaxy, is more distant and less massive than Andromeda and is believed to have collided with that galaxy in the past.

The attributes of these four galaxies may reflect their past interactions and are likely to shape future encounters. For example, astronomers are not currently sure how M32's compact ellipsoid shape took form, but they suspect that M32 may once have had a spiral shape that a tidal field from Andromeda transformed into its current elliptical one. Meanwhile, Triangulum and Andromeda are connected by a stream of hydrogen and stars, which is evidence that the two galaxies interacted between 2 and 8 billion years ago. Finally, among the trio of M31, M33, and the Milky Way, every pair is potentially on a collision course compelled by gravity. Triangulum might be ripped apart and absorbed by M31, it might collide with the Milky Way before the latter has any violent interaction with Andromeda, or it might participate in the collision between the Milky Way and the Andromeda Galaxy, which is expected to occur in about 4 billion years.
C13 · Many corporations, educational institutions, and publishers have made efforts in recent years to shift from paper-based reading to electronic and online reading in order to save paper. According to this logic, paper is gleaned from trees, which must be replenished, and the energy required to recycle paper waste is another blow to the environment as well as to organizations' finances. However, researcher Emma Gardsdale sees more gray area in the issue. First, in order for e-reading to be practical, tablets and e-readers are required. Because these devices have a limited lifespan of one to four years, they create e-waste that puts poisons such as arsenic, lead, and polybrominated flame retardants into landfills, where they can leach into water supplies and soil. Second, e-reading has a hidden cost: the electricity required to power servers, computers, and mobile devices. Gardsdale also points out that the ecological balance between paper reading and e-reading depends significantly on the amount of reading that one does. An e-reader, for example, will have a smaller carbon footprint than paper only if it is used for at least a certain volume of reading. Gardsdale argues that, despite the current all-encompassing enthusiasm for its ecological promise, electronic reading isn't the quick ecological fix it may seem to be. Over the long term, she says, both paper and electronic reading will likely have a place in our lives. Each method's role will depend on the energy-saving and waste-minimizing technologies that become available in the coming years, and on whether corporations that manufacture mobile reading devices are willing to take responsibility for the impact of their devices' life cycles.
Ready4 · If arc above is a semicircle, what is the length of ?

(1)

(2)
Ready4 · When the figure above is cut along the solid lines, folded along the dashed lines, and taped along the solid lines, the result is a model of a prism whose base is a hexagon. What is the sum of the number of edges and the number of faces of this prism?
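Since the net itself is not reproduced above, here is a quick check of the arithmetic, assuming the net folds into a standard hexagonal prism:

$$F = 2 \text{ (hexagonal bases)} + 6 \text{ (rectangular faces)} = 8, \qquad E = 6 + 6 + 6 = 18,$$
$$F + E = 8 + 18 = 26.$$

As a sanity check, Euler's formula $V - E + F = 2$ holds with $V = 12$ vertices: $12 - 18 + 8 = 2$.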
C13 · How aquatic vertebrates evolved into land vertebrates has been difficult for evolutionary biologists to study, in part because the shift from water to land appears to have occurred rapidly and has thus yielded a scarce fossil record. Prior to the advent of DNA sequencing, the primary guideposts in tracing the emergence of tetrapods had been morphological considerations, which highlighted the coelacanth and the lungfish as species of interest.

Coelacanths and lungfish are distinct from other fish in that they are lobe-finned species. Lobe-finned species, like ray-finned fishes such as tuna and trout, possess not cartilage but a bony skeleton, a key prerequisite for survival on land. Lobe-finned fish species are distinguished from ray-finned species by fins that are joined to a single bone and thus have the potential to evolve into limbs. Coelacanths and lungfish are two of the only lobe-finned species that are not extinct, and since they have evolved minimally since the time of the appearance of tetrapods, they are sometimes referred to as "living fossils." In fact, the first live coelacanth was discovered more than 100 years after the species had been discovered in fossilized form.

Whether the coelacanth in particular is rightly called a living fossil and whether it is the closest living relative of the original tetrapods are two questions that have been illuminated recently by genetic analysis. The coelacanth's genome has been sequenced, and this analysis has led to the conclusion that the lungfish is the closer relative of tetrapods. Moreover, coelacanth DNA has shown evolution over time, although at a rate much slower than that of most animals. The fish's morphology and its environment deep in the Indian Ocean may have created favorable conditions, allowing a slowly evolving species to survive for the last 400 million years.
Ready4 · In a recent telephone survey of over 6,000 Americans, the Pew Internet & American Life Project concluded that African Americans' usage of Internet technology lags behind that of whites. Survey respondents who identified as African American trailed whites by seven percentage points in use of the Internet: 87% of whites and 80% of blacks are Internet users. Moreover, 74% of white respondents had broadband Internet access in their home, whereas 62% of black respondents had such access.

Although the Pew survey appears to draw on a representative slice of Americans using careful survey methods, its results may suffer from having answered an inherently misleading question. The survey found, for example, that black and white Internet usage and access are identical once other variables are controlled for. Namely, 86% of African American respondents aged 18-29 were home broadband adopters, as were 88% of African American college graduates and 91% of blacks with an annual household income of $75,000 or more. These figures were not only well above the national average for broadband adoption but, more to the point, identical to those for whites of similar ages, incomes, and education levels. It follows that Internet adoption has nothing to do with race per se and everything to do with some or all of the factors of age, education, and household income. If Internet adoption correlates primarily with household income, as other studies of technology would suggest, then the survey in question does little more than lead us back to the fact that African Americans have a lower average household income than white Americans--a fact which has already been established. Nevertheless, the Pew study is a confirmation and a reminder of the fact that the current income difference between whites and blacks in America is having an impact on African Americans' access to technology and to the benefits that accrue from efficient access to the Internet.
Ready4 · Cap and Trade (a government measure to incentivize the reduction of carbon emissions by business entities through financial means) has become a popular method for reducing air pollution among nations across the world, many of which have witnessed substantial reductions in CO2 emissions since its implementation. Carlos Sangster and Stephen Saifuddin recognize that this development is a positive one but stipulate that Cap and Trade in itself may actually increase the long-term likelihood of environmental catastrophe. Emissions trading measures like Cap and Trade incentivize certain companies to reduce their emissions but allow companies that can afford to do so to continue to produce emissions at historic levels; indeed, the largest companies often accept the costs of Cap and Trade measures because, despite moderately increased overhead, they can do so and still sustain record profits and growth. Moreover, there is no guarantee that short-term reductions in carbon emissions will translate into reductions in the total amount of fossil fuels used, because even into the far future, fossil fuels might easily remain more cost-effective than environmentally friendlier alternatives. Even if government regulations were to make carbon emissions vastly more expensive, the current fossil fuel-based economic model would, were it to grow or even remain at its current size, produce more atmospheric carbon dioxide and global warming than would an economic model that put a hard cap on the amount of carbon emissions that could be produced. Sangster and Saifuddin argue that to protect the planet's environmental health effectively and encourage a more sustainable fuel economy, governments must implement measures that mandate environmentally friendly energy sources and keep as-yet untapped fossil fuels safely underground. Unduly emphasizing halfway measures like Cap and Trade, which appeal to business entities and political interests with a vested stake in the prolonged use of fossil fuels, may distract global regulators from policies that might prove more effective in averting the disastrous effects of climate change.
Ready4 · The circle graph above represents the total number of items sold at Store , broken down into six categories of the different types of items the store sells. If the center of the circle is and Store sells a total of 5,800 items, how many items are in category ?

(1)

(2) Combined, all categories except account for of the graph.
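The specific angle values did not survive extraction, but the relation the statements rely on is standard: a category whose sector has central angle $\theta$ (in degrees) accounts for

$$\frac{\theta}{360} \times 5{,}800$$

items. Purely as an illustration, a hypothetical $90^\circ$ sector would correspond to $5{,}800/4 = 1{,}450$ items; a statement is sufficient exactly when it pins down the central angle of the category in question.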
Ready4 · The controversial Canadian media intellectual Marshall McLuhan first began to garner public attention with his 1951 book The Mechanical Bride, precisely when North America was first gripped by, and attempting to come to grips with, the influence of television programming and advertising on society. One of McLuhan's core theses was that every communication medium, including television, has inherent effects apart from those that any artist or businessperson willfully creates through it, and that these effects are not always positive.

McLuhan achieved the height of public attention in part by emulating the advertisers he studied, inventing memorable phrases to convey his points (such as "the medium is the message," "turn on, tune in, drop out," and "global village"). Arguably, however, he never expected or even hoped to deflect substantially the tide of the technological and social forces in play at the time. He likened the successful reader of his works to the sailor in Edgar Allan Poe's story "A Descent into the Maelstrom," who saves himself by studying a whirlpool and by moving with, not against, its current.

The media thinker's legacy is in equal parts inevitable and inconsequential. The advent of the internet, which he had predicted thirty years prior, and of subsequent technologies would force society to broaden its perspective on media channels and examine their impact more closely. On the other hand, in the present milieu, where media professionals and advertisers tend to speak of "channels" and "content" as well-defined and non-overlapping components of communication, McLuhan's primary message appears to have been lost among all the new media.
Ready4 · On the circle above with center O, the length of arc PQR is 12π. What is the circumference of the circle?
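The figure, and with it the central angle subtended by arc PQR, is not reproduced, so only the method can be shown: for a central angle of $\theta$ degrees,

$$12\pi = \frac{\theta}{360}\cdot C \quad\Longrightarrow\quad C = \frac{360}{\theta}\cdot 12\pi.$$

Purely as an illustration, if $\theta = 270^\circ$ (an assumed value, not taken from the figure), then $C = \frac{360}{270}\cdot 12\pi = 16\pi$.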
Ready4 · If arc above is a semicircle, what is the length of segment ?

(1)

(2)
C13 · The diversity of species in bacterial communities is often studied by phenotypic characterization. A problem with this method is that phenotypic methods can be used only on bacteria that can be isolated and cultured, and most soil bacteria that have been observed by fluorescence microscopy cannot be isolated and cultured.

DNA can be isolated from bacteria in soil to obtain genetic information about the nonculturable bacteria therein. The heterogeneity of this DNA is a measure of the total number of genetically different bacteria, or the number of species. DNA heterogeneity can be determined by thermal denaturation and reassociation. In general, renaturation of homologous single-stranded DNA follows second-order reaction kinetics. In other words, the time required for a given fraction of the DNA to renature is proportional to the genome size, or the complexity of the DNA, defined as the number of nucleotides in the DNA of a haploid cell, without repetitive DNA. The genetic diversity of a bacterial community can be inferred in a similar manner.

Vigdis Torsvik, Jostein Goksøyr, and Frida Lise Daae used this process to analyze soil samples taken from a beech forest north of Bergen, Norway. The reassociation curves for the main DNA fraction did not follow ideal second-order reaction kinetics, so the half-life values gave only approximate, underestimated values for the number of genomes present. Nevertheless, the soil bacterium DNA was very heterogeneous; the diversity corresponded to about 4,000 distinct genomes of a size typical of standard soil bacteria. This is about 200 times as many species as could have been isolated and cultured.

Various procedures for isolating DNA from river sediments and seawater are known. This opens up the possibility of applying the thermal denaturation method to systems other than soil. The results of the Norway study indicated that the genetic diversity of the total bacterial community in a deciduous-forest soil is so high that heterogeneity can be determined only approximately. In environments with pollution or extreme conditions, the genetic diversity might be easier to determine precisely.
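The kinetics the passage invokes can be written out explicitly (a sketch of standard $C_0t$ analysis, supplied here rather than taken from the passage): for ideal second-order renaturation,

$$\frac{C}{C_0} = \frac{1}{1 + kC_0t},$$

where $C_0$ is the initial concentration of single-stranded DNA and $C$ is the concentration still single-stranded at time $t$. Setting $C/C_0 = \tfrac{1}{2}$ gives the half-renaturation value $C_0t_{1/2} = 1/k$, and because the rate constant $k$ varies inversely with sequence complexity, $C_0t_{1/2}$ grows in direct proportion to the number of distinct genomes in the sample, which is how the reassociation curves can be read as a measure of diversity.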
Ready4 · The release of accumulated stress at a particular point during an earthquake alters the shear and normal stresses along the tectonic plate boundary and surrounding fault lines. According to geophysical theory, the Coulomb Failure Stress Change, which is the estimated alteration and resultant transfer of shear and normal stresses along a plate margin, is a function primarily of the change in shear stress along a fault and secondarily of the change in normal stress along the fault and the change in pore pressure in the fault zone, the latter two factors scaled by the friction coefficient characteristic of the plate margin.

By measuring and calculating the stress transfer following seismic activity, it is possible subsequently to construct basic contour maps of regions that have experienced positive stress changes and are therefore at higher risk of containing the epicenters of future large earthquakes. Calculations have revealed that when an earthquake occurs, approximately 80% of the energy is released as seismic waves, whereas the remaining 20% is stored and transferred to different locations along the fault, making those specific regions more susceptible to future earthquakes.

Predicting earthquakes by using the theory of stress transfer has important potential applications. The main rival technique for forecasting, the statistical analysis of patterns in seismic activity, suffers from a contradiction: foreshocks are deemed evidence of the potential for a future high-magnitude earthquake, but the lack of foreshocks along faults known to be active has been considered an equally plausible precursor for large events.

The stress transfer theory was used to predict the location of a magnitude 7.4 earthquake that occurred two years later in the port city of Izmit, Turkey, killing more than 30,000 people. A limitation of the theory as currently applied, due to insufficient understanding of plate kinematics, is that refining predictions with temporal constraints appears to be far more problematic; the team that gave the Izmit prediction had been able to forecast only that an event would occur near the city within thirty years.
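The functional form described in the first paragraph is conventionally written as follows (a standard formulation from the Coulomb-stress literature, given here because the passage states it only in words):

$$\Delta \mathrm{CFS} = \Delta\tau + \mu\left(\Delta\sigma_n + \Delta p\right),$$

where $\Delta\tau$ is the change in shear stress resolved on the fault, $\Delta\sigma_n$ is the change in normal stress (positive for unclamping), $\Delta p$ is the change in pore pressure, and $\mu$ is the friction coefficient characteristic of the plate margin. Regions where $\Delta \mathrm{CFS} > 0$ are the ones mapped as being at elevated risk in the contour maps the passage describes.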