GMAT考满分 · Question Bank


9,362 questions collected

Search results: 14,465

Source | Question content
Ready4

One of the most frequently asked questions has always been: How was the universe created? Many once believed that the universe had no beginning or end and was truly infinite. With the advent of the Big Bang theory, however, no longer could the universe be confidently considered infinite. The universe was forced to take on the properties of a finite phenomenon, possessing a history and a beginning. However, over the decades, there have been multiple interpretations of the Big Bang. In the standard interpretation of the Big Bang, which took shape in the 1960s, the formative event was not an explosion that occurred at some point in space and time — it was an explosion of space and time. In this view, time did not exist beforehand. Even for many researchers in the field, this was a bitter pill to swallow. It is hard to imagine time just starting: How does a universe decide when it is time to pop into existence?

In 2004 Sean Carroll and a graduate student of his, Jennifer Chen, came up with a much different answer to the problem of "before." In his view, time's arrow, or time's flow in only one direction, and time's beginning cannot be treated separately and are in fact cyclical: There is no way to address what came before the Big Bang until we understand why the before precedes the after. “Our universe has been evolving for 13 billion years,” Carroll says, “so it clearly did not start in equilibrium.” Rather, all the matter, energy, space, and even time in the universe must have started in a state of extraordinarily low entropy. That is the only way we could begin with a Big Bang and end up with the wonderfully diverse cosmos of today. Understand how that happened, Carroll argues, and you will understand the bigger process that brought our universe into being. In Carroll and Chen's theory, fluctuations in the dark-energy background trigger a crop of pocket universes from empty space, which eventually revert to empty space.

Yet another theory is put forth by rebel physicist Julian Barbour. According to Barbour in his 1999 book "The End of Time," all possible configurations of the universe, every possible location of every atom, exist simultaneously. There is no past moment that flows into a future moment; the question of what came before the Big Bang never arises because Barbour's cosmology has no time. The Big Bang is not an event in the distant past; it is just one special place in a vast universe.

Ready4

     It has been estimated that over 20% of the annual gross domestic product in the United States is the result of innovation backed at some point by venture capital investors. But what is innovation? One traditional view of innovation is that it is a systematic business process, occurring within an organization, required to secure ongoing financial growth. But much of the most acclaimed and influential innovation has started with an individual's idea and only somewhat later been followed by an organization to execute on that idea, so the organizational definition is of limited relevance.

     A more practical definition of innovation is that it is the creation of anything new intended to be commercialized. Under such a definition, the efforts of a lone individual developing a radical idea and those of a department within a large company exploring a new adjacent market are both examples of innovation. This somewhat loose definition, however, fails to address explicitly what makes an innovation truly new, successful, or authentic, although it may imply that all innovation is equally valid in a sense. Meanwhile, the oft-repeated challenge to uses of the term innovation may put too little emphasis on the activity and too much on its results. Quite possibly, 80% of the value of innovation has been contributed by 20% of the activity, but whether that 20% of activity could have manifested itself without a culture and economy to support the whole is less clear. In this regard, policy- and strategy-oriented attempts to refine this loose definition of innovation further are not without merit.

Ready4

     In 1905, the Supreme Court of the United States decided the case Lochner v. New York, and in doing so overturned the Bakeshop Act, which limited the number of hours that a baker could work per day to ten. The Court ruled that the Act removed a person's right to enter freely into contracts, which it construed as provided for by the Fourteenth Amendment. The Court had previously determined through multiple rulings that the Due Process Clause, found in both the Fifth and Fourteenth Amendments, was not merely a procedural guarantee, but also a substantive limitation on the type of control the government may exercise over individuals. Lochner set a precedent against the established federal and state laws regulating working hours and wages. For example, in Adkins v. Children's Hospital, in 1923, the Court ruled that federal minimum wage legislation for women was an unconstitutional infringement of liberty of contract, as protected by due process.

     Some subsequent development of human rights evolved on the basis of Lochner; for example, Adkins was a significant point in the women's rights movement in the U.S., as the legislature and judiciary for decades debated whether to establish absolute equality of women or provide only special protections and regulations for them. Nevertheless, the Court overturned Adkins and undermined Lochner in deciding West Coast Hotel v. Parrish, in 1937. That ruling repudiated the idea that freedom of contract should be unrestricted and echoed, after the fact, the dissenting opinion of Justice Holmes in Adkins that there were plenty of constraints on contract, such as that against usury. At the time of West Coast Hotel, whose outcome hinged on an unexpected shift in the positions of Associate Justice Roberts, the dissenting Justice Sutherland was critical of the prospect that the interpretation of the Constitution reflected in the decision had been colored by contemporary events—ostensibly, the pressures placed upon workers by the circumstances of the ongoing Great Depression. Time has evidently judged this criticism to have been incorrect, since, while Lochner influenced a ruling whose imprint still remains, individual freedom of contract is not exempt from reasonable laws to protect worker health and safety.

Ready4

     A myth in the ongoing debate about minimum wages is that raising minimum wages will necessarily increase a country's unemployment rate. While there are cases in which a marginal increase in wage rates might affect a company dramatically enough for it to change how it operates, in most companies, the cost increases of higher wages will tend to affect the bottom line without altering the staffing structure. For example, if a particular fast-food location operates during a particular time window with a staff of five people, then five must be the minimum staffing level for that business to achieve optimal results. In the case of a national fast-food chain, especially, these operational questions in general will already have been optimized. Even before rates are raised, managers of these locations have asked themselves whether they can afford to cut jobs and whether they are staffed at optimal levels (in this case, five people). A more specific calculation is needed. In this example, the precise question is how a marginal increase in staffing costs would compare to the decrease in business that would result from reducing the staff level from five to four and serving food less quickly. The results of this analysis would not necessarily be consistent across industries, or even across markets and companies within an industry.
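The marginal comparison the passage calls for can be sketched in a few lines of Python; every figure below (hours, wages, revenue, loss rate) is an assumption made for illustration, not data from the passage:

```python
# A back-of-the-envelope version of the passage's marginal analysis.
# Every figure here is assumed for illustration, not taken from the passage.

HOURS_PER_WEEK = 60                 # hours the location operates per week (assumed)
WAGE_OLD, WAGE_NEW = 12.0, 13.5     # hourly wage before / after the increase (assumed)
STAFF_CURRENT, STAFF_REDUCED = 5, 4
WEEKLY_REVENUE = 20_000.0           # weekly revenue at full staffing (assumed)
REVENUE_LOSS_RATE = 0.05            # sales lost to slower service with 4 staff (assumed)

# Extra weekly cost of paying all five workers the higher wage.
extra_wage_cost = (WAGE_NEW - WAGE_OLD) * HOURS_PER_WEEK * STAFF_CURRENT

# Weekly saving from cutting one worker, net of the business lost.
wage_saving = WAGE_NEW * HOURS_PER_WEEK * (STAFF_CURRENT - STAFF_REDUCED)
revenue_loss = WEEKLY_REVENUE * REVENUE_LOSS_RATE
net_effect_of_cutting = wage_saving - revenue_loss

print(f"Extra cost of keeping 5 staff at the new wage: ${extra_wage_cost:,.2f}/week")
print(f"Net effect of cutting to 4 staff: ${net_effect_of_cutting:,.2f}/week")
# A negative net effect means cutting staff loses more business than it saves
# in wages, which is the passage's point: a wage increase hits the bottom
# line without changing the optimal staffing level.
```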

Ready4

     For an online retailer, inventory represents a major source of cost. Every item of inventory represents an item that has not been sold and that therefore represents unrealized gains. Moreover, the greater the inventory a retailer must hold in anticipation of filling customer orders, the greater the amount of money it must have invested in something that cannot be used for other purposes. Two tenets of inventory management are to turn over inventory as quickly as possible and to hold the minimum amount of inventory necessary to fulfill tomorrow's orders efficiently. Those two challenges are related. If lower levels of inventory are needed, then less inventory will be on hand and it will be turned over more quickly. In competing with another online retailer, however, a company will be inclined to hold inventory of as many items as exist. If a particular type of item is not in stock at one retailer, a customer will turn to a competing retailer. Holding all of the possible stock items in stock adds to inventory cost. One solution to this issue has previously been for a company to have one central, monster-sized warehouse. Centralizing inventory allows a company to hold the widest possible range of items at the lowest necessary levels. But since customers also value speed of delivery, the "monster warehouse model" has a flaw: it involves shipping from a location that may not be as close to a customer as otherwise possible and is therefore vulnerable to a competitor who can deliver faster.

     These considerations highlight the importance of information about consumer demand. At the basic level, knowing what items sell helps brick-and-mortar retailers determine what items to stock. In competitive online retail, companies with good data can stock minimum sufficient inventory levels. Even more significant, a national retailer that can forecast demand for specific items by region can move from the monster warehouse model to a system of regional warehouses, decreasing shipping time without increasing inventory costs.
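As a rough sketch of the turnover tenet above, the Python below computes how inventory level affects turnover and days of stock on hand; the cost figures and the `turnover` helper are assumed for illustration:

```python
# Toy illustration of the two inventory tenets: for the same sales volume,
# holding less inventory means faster turnover. All figures are assumed.

def turnover(annual_cogs: float, avg_inventory_value: float) -> float:
    """Inventory turnover: times per year the stock is sold through."""
    return annual_cogs / avg_inventory_value

ANNUAL_COGS = 1_200_000.0  # annual cost of goods sold (assumed)

for inventory in (300_000.0, 100_000.0):
    t = turnover(ANNUAL_COGS, inventory)
    print(f"Avg inventory ${inventory:,.0f}: turns {t:.1f}x/year, "
          f"~{365 / t:.0f} days of stock on hand")
# Lower inventory at the same sales volume -> higher turnover and less
# capital tied up in unsold goods, the cost the passage describes.
```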

Ready4

     M. Norton Wise's examination of the calorimeter, a machine invented in the 1780s to measure heat, elucidates his theory of a role that technology plays in society outside of the applications for which it has been developed.

     In the schema given to us by Thomas Kuhn, as popularly understood, cultural differences are mediated through the paradigms that underlie theories—the theories' interconnected assumptions. According to Wise's theory, however, technologies act as cultural mediators, reconciling differences among different fields of thought and study, such as chemistry, political economy, and mathematics, and also connecting ideas with realities. When Antoine-Laurent Lavoisier and Pierre-Simon de Laplace first invented the calorimeter, they thought of it in comparison to a simple physical device, the balance scale: the calorimeter balanced quantities of heat against quantities of melted ice. In fact, Lavoisier and Laplace conceived of the device somewhat differently, and in this respect the calorimeter performed mediation of the first kind. Lavoisier, who is remembered as a chemist, viewed the calorimeter as measuring a balance between chemical substances, whereas Laplace, who is remembered as a mathematician and physical astronomer, viewed the calorimeter as balancing forces.

     The differing interests of Lavoisier and Laplace (who tried at least once to rid himself of the partnership in order to work on pure mathematics) caused tension. This tension between the otherwise distinct fields of chemistry and physical astronomy was resolved, in part, by the calorimeter itself; it provided a common ground to the two fields in its own concrete existence and quantitative measure, if not entirely in concept. Secondly, the calorimeter, in providing commonly accepted measurements, gave commonly accepted meanings to the ideas involved in interpreting those measurements: caloric fluid and the physical force of heat.

     We are typically more inclined to view a new technological invention in the terms of Kuhn: it supports an existing paradigm, or, rarely, massively disrupts it and causes a paradigm shift. Wise would agree with Kuhn that our conception of the electron is reinforced by the television and the fiber-optic cable, but while Kuhn sees the theoretical relationship as one of champion against challenger, arbitrated through defeat and continued reigning victory, technologies per Wise arbitrate by harmonizing.

Ready4

     The term "glass ceiling" as a discriminatory barrier limiting females from reaching senior management positions was used in the early 1990s, around the time that females first surpassed males in annual university degrees obtained in the United States. Studies of employment in various cities, such as the 2003 study of employment data in Sweden conducted by Albrecht, Bjӧrklund, and Vroman, have found a consistent gap between men's and women's wages after these are controlled for gender differences in age, education level, education field, sector, industry, and occupation. However, empirical studies as early that of Powell and Butterfield in 1994 have suggested that gender, as a job-irrelevant variable in consideration of promotions to top management positions, may actually work to women's advantage. Whereas the gender gap in pay is strongly supported by data, the glass-ceiling notion itself as a discriminatory force has been harder to account for in empirically proven terms.      Studies that have been considered by some a partial repudiation of the glass-ceiling theory have indicated that men and women differ in their preferences for competition and that such differences impact economic outcomes. If women are less likely to compete, they are less likely to enter competitive situations and hence less likely to win. For example, in a laboratory experiment featuring a non-competitive option and a competitive incentive scheme, men selected the latter twice as often as did women of equal ability. One explanation is that men are inherently more competitive; another is that the social influences limiting women's presence in executive leadership generally make their impact long before women are near the ceiling.    

Ready4

     In a recent telephone survey of over 6,000 Americans, the Pew Internet & American Life Project concluded that African Americans' usage of Internet technology lags behind that of whites. Survey respondents who identified as African American trailed whites by seven percentage points in use of the Internet: 87% of whites and 80% of blacks are Internet users. Moreover, 74% of white respondents had broadband Internet access at home, whereas 62% of black respondents had such access.

     Although the Pew survey appears to draw on a representative slice of Americans using careful survey methods, its results may suffer from having answered an inherently misleading question. The survey found, for example, that black and white Internet usage and access are identical once other variables are controlled. Namely, 86% of African American respondents aged 18-29 were home broadband adopters, as were 88% of African American college graduates and 91% of blacks with an annual household income of $75,000 or more. These figures were not only well above the national average for broadband adoption, but, more to the point, they were identical to those of whites of similar ages, incomes, and education levels. It follows that Internet adoption has nothing to do with race per se and everything to do with some or all of the factors of age, education, and household income. If Internet adoption correlates primarily with household income, as other studies of technology would suggest, then the survey in question does little more than lead us back to the fact that African Americans have a lower average household income than white Americans--a fact which has already been established. Nevertheless, the Pew study is a confirmation and a reminder of the fact that the current income difference between whites and blacks in America is having an impact on African Americans' access to technology and to the benefits that accrue from efficient access to the Internet.
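The controlling-for-variables argument can be made concrete with a toy calculation; the counts below are hypothetical stand-ins, not the Pew data:

```python
# Toy numbers showing how an overall gap can disappear once income is
# controlled for. These counts are hypothetical, not the Pew data.

groups = {
    # income bracket -> {group: (broadband adopters, respondents)}
    "under_75k": {"white": (550, 1000), "black": (550, 1000)},
    "75k_plus":  {"white": (910, 1000), "black": (182, 200)},
}

for race in ("white", "black"):
    adopters = sum(groups[b][race][0] for b in groups)
    total = sum(groups[b][race][1] for b in groups)
    print(f"{race}: overall adoption {adopters / total:.0%}")

for bracket, data in groups.items():
    rates = {race: f"{a / n:.0%}" for race, (a, n) in data.items()}
    print(f"{bracket}: {rates}")
# Overall, whites lead (73% vs 61%), yet within each income bracket the two
# rates are identical (55% and 91%): the gap reflects income composition,
# which is the passage's argument about what the survey actually measured.
```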

Ready4

     Birds of various species, from pigeons to swallows to larger birds, can navigate long distances on Earth, across continents and hemispheres. That they traverse these distances thanks to a magnetic sense has been demonstrated through tests in which birds fitted with magnets have lost their navigational capability. Precisely what biological mechanism enables birds to orient in this way is still something of a mystery, however, with two theories prevailing.

     One theory is that birds possess magnetic sensors in the form of grains of magnetite, an easily magnetized form of iron oxide. Such magnetite grains are common not only in animals but even in bacteria, where they have been established as a component enabling magnetic orientation. In the case of birds, magnetite grains are numerous in beaks, as dissections of pigeons have confirmed. Moreover, in another experiment, the trigeminal nerve, which connects the beak to the brain, was severed in reed warblers; the affected birds lost their sense of magnetic dip, which is critical to navigation.

     Critics of the theory have pointed out that the grains in the beak, though abundant, are not concentrated, as would be expected in a sensory organ, but rather found in wandering macrophages. And while an alternative explanation for birds' sensory abilities might posit magnetite grains outside of the beak, such an explanation would be supported neither by the beak dissections nor by the tests of severed trigeminal nerves. Critics of magnetite-centric theories suggest a second theory: that the magnetic field of the Earth has an influence on a chemical reaction in birds, specifically in a bird's retina. Experiments have demonstrated that destroying the portion of a robin's retina known as cluster N eliminates the bird's ability to detect north. Birds' eyes do not contain magnetite grains, however. Rather, some advocates of the theory that birds navigate by retinal interaction believe that a retinal protein known as cryptochrome processes magnetic information within cluster N. Surprisingly, the mechanism by which cryptochrome could detect magnetic orientation depends on quantum mechanics: when hit by light, the cryptochrome would create a pair of particles, one of which subsequently presents information to the eye, in the form of a spot, when it is triggered by the corresponding particle after that particle has traveled some distance.

Ready4

     The World Bank has offered ethical guidance to the governments of nations in making priority-setting decisions for pharmaceutical policy. A leading point of this counsel is to respond in only limited ways to patient demands for therapies that are not cost-effective. In every healthcare system, there is a possibility, and, frankly, a reality, of overspending in the course of treatment, wasting a nation's limited resources. Patients who independently finance needless treatments that create no further medical costs manifest a less problematic form of overspending, but their treatment nevertheless potentially represents economic dead weight and the diversion of limited resources that could be applied toward necessary ends. Overspending public funds is even more problematic, since public sector spending is systematic and controllable through policy. Most serious is over-medication that harms the patient or others. A leading example of such an erroneous practice is the excessive administration of antibiotics, which, in fostering antimicrobial resistance, may pose as much risk as, or even greater risk than, under-administration of vaccines. Decreasing wasteful medical expenditures is important to the World Bank's suggested primary goal, which is to maintain a cost-effective pharmaceutical system that maximally, and equitably, improves population health.

     Furthermore, the World Bank recommended, as a counterpart to these measures, efforts to improve the population's understanding of pharmaceutical uses and choices. This long-term goal is equally important and equally difficult to achieve in wealthier nations. Better public understanding helps decrease the tension between less-informed wants and well-determined needs. Culturally ingrained maxims, such as a preference for injections, do not change overnight. Furthermore, relying on brand identification can be a rational strategy for information-limited consumers worldwide. Nevertheless, moving citizens to a more informed and empowered position is an ethical obligation, as well as a strategy to reduce costs and minimize risks.

Ready4

     Miranda and Ariel are the innermost of Uranus's five largest moons, about one seventh and one third of the size of Earth's moon, respectively. Both appear to be roughly half ice and half rock. Titania and Oberon, Uranus's two largest moons, are the farthest large moons from Uranus and also seem to be composed of about half ice and half rock. None of the five largest moons of Uranus have any detectable atmosphere or magnetic field. This contrasts sharply with the moons of Jupiter, the compositional and atmospheric characteristics of which differ based on the moon's distance from the planet.

     Despite their compositional similarities, the geologies of the moons of Uranus differ greatly. Miranda, the smallest of the five major moons, has huge canyons, some of which are 12 miles deep, a characteristic that is especially notable given that the moon itself has a diameter of only 290 miles. Miranda's entire surface is marked by massive faults, steep cliffs, smooth plains, and curiously shaped rifts; these features indicate a significant amount of recent geological activity. Ariel, on the other hand, has a few larger craters on its surface, but most are small. Many of Ariel's craters seem to be submerged to some degree, which indicates that the surface is relatively young, having been reshaped by geological movement. On Titania, a number of rift valleys, similar to those on Ariel but less extensive, crisscross the moon's surface. Titania's surface also reveals several faults; these indicate that the most recent geological activity on the moon was a very long time ago. Similarly, Oberon, the outermost of the five major moons, has not changed much recently. Its impact craters are more common and much bigger than the craters on Ariel and Titania, but its faults are limited to the southern hemisphere and are old enough to indicate a lack of geological activity anytime in recent history.

Ready4

When Medgar Evers applied to the then-segregated University of Mississippi Law School in February 1954, he did so at a crucial moment in American history. Three months later, in May, the Supreme Court’s landmark decision in Brown v. Board of Education struck down state-sponsored segregation, stating that "separate educational facilities are inherently unequal." The school’s refusal to admit Evers drew the interest of the NAACP, and the university ultimately became the epicenter of its historic campaign for desegregation.

The Brown v. Board of Education ruling paved the way for integration and was a major victory for the civil rights movement, but the South was not ready to accept the change. The state governments of Texas, Arkansas, Florida and Alabama actively fought the decision, with some politicians physically blocking African American students’ entry into high schools and universities, moving aside only when confronted with military officers sent by the federal government to enforce the law. The entrenched racism of the South came into conflict with the rest of the country, creating a sense for African Americans that they would have to fight for the rights that had, legally, already been granted to them.

Evers was an active public figure, conducting well-publicized investigations into race-based injustices being perpetrated in the South, such as the unprosecuted murder of fourteen-year-old Emmett Till. This limelight brought numerous death threats and attempts on his life. On June 12, 1963, just hours after President John F. Kennedy's historic Civil Rights Address, Evers was shot in the back outside his home by a white supremacist. While his death was undoubtedly a tragic loss, some scholars have suggested that it galvanized the African American community, giving many members renewed motivation to carry on Evers’s crusade. His murder was a rallying cry for those who supported civil rights in the U.S., and his legacy continues to lend strength to the ongoing campaign for racial equality.

Ready4

     The Smithian model of innate human disposition—as entirely self-interested, gleaning motivation only from personal profit—has long underpinned modern economic theory. One proponent of the person-as-selfish-agent model is Samantha T. Cleary, an economist who argues that this view of human nature remains the most useful basis for advancing behavioral economics and opposes more complex algorithms that take into account such complicating factors as altruism and social pressure. In Cleary's view, economics is a purely statistical field of study concerned with a society's most common motivators, not every individual's. The field can operate most efficiently by using straightforward models that Cleary poetically describes as elegant.
     Yet how will economics strengthen its predictive powers if not by increasing the sophistication of its models, using swaths of data to inform analysis of the many processes that drive the aggregation of human economic decisions? How can behavioral economics develop without this step? If a behavior follows any consistent pattern across a large cross section of the population, economists should be able to measure and predict it, at least theoretically. But even if an algorithm could account for complex motivations and apparently irrational decisions, Cleary argues that such an approach to studying economic behavior would remain greatly vulnerable to misinterpretation, bias, and hyperlocal preferences. Furthermore, this complex algorithm would only introduce a greater margin of error, while being so complex and situation-specific as to be useless for rendering any long-term or generalizable predictions. It might have the capacity to describe the behaviors of a small pocket of people, but it couldn't contribute to broader economic theory in any meaningful way.
     Imagine, though, that an intricate algorithm were able to accurately predict behaviors based on many more inputs than self-interest. Whether the prediction confirmed or denied established economic theory, it would still contribute to the growing body of data in behavioral economics. People, unlike machines, do not operate according to simple psychological rules, and if we can account for this to some extent in our work, it is hard to envision a reason why we would not accept the challenge of doing so. Economies, after all, depend ultimately on one thing: human decision-making. Work based on the assumption that personal choices obey centuries-old economic theories is willfully limited in scope. To be aligned with the arguments of purists like Cleary, one would have to dismiss the ultimate purpose of economic study: to understand and predict the workings of real economies.
Ready4

     Cap and Trade (a government measure to incentivize the reduction of carbon emissions by business entities through financial means) has become a popular method for nations across the world to reduce air pollution, with many witnessing substantial reductions in CO2 emissions since its implementation. Carlos Sangster and Stephen Saifuddin recognize that this development is a positive one but caution that Cap and Trade in itself may actually increase the long-term likelihood of environmental catastrophe. Emissions trading measures, like Cap and Trade, incentivize certain companies to reduce their emissions but allow companies that can afford to do so to continue to produce emissions at historic levels; indeed, the largest companies often accept the costs of Cap and Trade measures because, despite moderately increased overhead, they can do so and still sustain record profits and growth. Moreover, there is no guarantee that short-term reductions in carbon emissions will translate into reductions in the total amount of fossil fuels used, because even into the far future, fossil fuels might easily remain more cost-effective than more environmentally friendly alternatives. Even if government regulations were to make carbon emissions vastly more expensive, the current fossil fuel-based economic model would, were it to grow or even remain at its current size, produce more atmospheric carbon dioxide and global warming than would an economic model that put a hard cap on the amount of carbon emissions that could be produced. Sangster and Saifuddin argue that to effectively protect the planet's environmental health and encourage a more sustainable fuel economy, governments must implement measures that mandate environmentally friendly energy sources and keep as-yet untapped fossil fuels safely underground. Unduly emphasizing halfway measures like Cap and Trade, which appeal to business entities and political interests with a vested stake in the prolonged use of fossil fuels, may distract global regulators from policies that might prove more effective in averting the disastrous effects of climate change.

Ready4

     Scientists have long known that two brain structures lying below the rostrum of the corpus callosum, called septal nuclei (SNs), play a significant role in human pleasure response. This part of the brain interacts with many other elements of the limbic system, which regulates fear expression and other forms of emotional response. Studies show that in some animals, most notably rats, electrical stimulation of SNs can motivate self-stimulation, causing them to perform such behaviors as manipulating levers or returning to regions of their housing that administer further electrical stimulation. Furthermore, connections between the SNs and portions of the brain dedicated to olfaction and memory retention have also been discovered.

     Several other neural structures have been found to play a role in governing the brain's emotional responses, however, not just the SNs of rats and humans. In fact, when laboratory rats had electrical stimulation applied to their habenular nuclei, pleasure responses shifted by 30 percent, whereas the same electrical shock applied to the SNs produced a lesser result. While scientists remain convinced that SNs play a role in the brain's regulation of fear, sadness, joy, and pleasure response, they now believe that other neural structures may respond more forcefully to stimulation—even if the voltage of the shock administered doesn't change—than the septal nuclei.

Ready4

     By 2014, the economy of China had outpaced that of the United States in terms of Purchasing Power Parity (PPP). But what is “PPP”? Analysts define PPP as the amount of a given currency it takes to purchase goods or services—a meal or a cell phone, for example. One understanding of PPP presents it as an indicator of the relative economic strength of two or more countries. But outside economic circumstances, like cost of living, confound the use of PPP as a reliable indicator of economic strength. Thus, most analysts complement PPP with a number of other related metrics.

     Gross Domestic Product (GDP) is a more traditional measure of economic strength: the sum of all final goods and services produced in a country during a given period of time. By this metric, the economy of the United States still out-produces that of China by a substantial margin. However, aside from the fact that it doesn't account for trends over a longer term, this measurement doesn't address the economic well-being of entities or individuals below a national scale, both important indicators of general economic health. It takes the wealth of a national economy and extrapolates that to represent the wealth of the people it comprises. Thus, the major corporations doing very well in the United States—technology producers, financiers, etc.—are grouped with workers, families, and organizations that have access to substantially less capital. That GDP obscures instances of stark income inequality suggests that, although it is a useful measure of the productivity of a national economy, its uses are limited for anyone looking to do comprehensive wealth analysis.
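A toy Python calculation of the PPP idea defined in the first paragraph may help; every price and figure below is assumed for illustration:

```python
# Toy version of the PPP comparison described above. Every price and figure
# is assumed for illustration; none is real data.

basket_usd = 100.0    # price of a reference basket of goods in the US (assumed)
basket_cny = 400.0    # price of the same basket in China, in yuan (assumed)
market_rate = 7.0     # market exchange rate, yuan per dollar (assumed)

# The PPP rate is the exchange rate that equalizes the basket's price.
ppp_rate = basket_cny / basket_usd

gdp_cny = 120e12      # a hypothetical GDP figure in yuan

print(f"PPP rate {ppp_rate:.1f} CNY/USD vs market rate {market_rate:.1f}")
print(f"GDP converted at market rate: ${gdp_cny / market_rate / 1e12:.1f} trillion")
print(f"GDP converted at PPP rate:    ${gdp_cny / ppp_rate / 1e12:.1f} trillion")
# Where local prices are lower (ppp_rate < market_rate), the PPP conversion
# yields a larger economy, which is how one country can lead in PPP terms
# while trailing in market-rate GDP.
```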

Ready4

     A wide variety of weather phenomena (heat and cold waves, winter storms, hurricanes, tornadoes, and flooding) display more-or-less consistent patterns of frequency and severity—but such patterns have been increasingly inconsistent during recent decades. Though natural variability plays a role in determining the course of weather systems, there is evidence that modern trends observed by scientists are driven by a unified causal factor. Isolating that causal factor, however, has proved remarkably difficult, despite substantial climatological research. The types of evidence analysis that normally yield an understanding of broader climate phenomena, like glacial core sampling or the observation of tree rings, have been tricky to apply in analyzing the more complex relationships between individual weather events. Moreover, climate scientists and meteorologists have to distinguish weather anomalies from weather events representative of relevant trends. To summarize, available information indicates that the effects of anthropogenic climate change, though significant, are difficult to disentangle from the welter of other variables that interact with weather phenomena.

     Nonetheless, detailed research indicates that the primary causal factor behind weather trends may be anthropogenic climate change. Recent decades have seen increases in the number of days yearly that reached record high temperatures and decreases in the number of days yearly that reached record lows, but instances of record-breaking winter precipitation (snow) have increased as temperatures have risen. This information was once seen as contradicting established models of climate change, but scientists have explained it through a new hypothesis that accounts for the amount of moisture in the atmosphere. Warm periods of increased duration are postulated to cause increased winter precipitation if, evaporating vast quantities of water into the air, they dry some areas while weather systems bring the evaporated moisture to others. Once the moisture reaches colder climates, it condenses and falls back to earth as rain or snow. As more regions become warmer for longer periods, the amount of airborne water increases, as do instances of extreme rain and snow. Thus, seemingly minor changes in global temperature can have significant effects on the frequency and severity of weather phenomena in every season and environment.

     What most recommends this theory is its ultimate simplicity. Though it may seem that droughts in some areas are unrelated to periods of increased precipitation in others, climate change is responsible for an environmental continuum that produces both. The worldwide increase in temperatures is the one cause that can explain all of these disparate phenomena.

Ready4

     Researchers have attempted to explain the withdrawal of many healthy adult men from the American workforce through traditional economic explanations, such as overseas competition and the increased automation of traditional blue-collar jobs, which are often cheaper options for companies looking to save money on labor. However, detailed investigation indicates that increased offshoring and automation fail to fully explain this drop in labor participation. Instead, this investigation yields data that suggest only a multiplicity of causes can wholly account for widespread male retreat from the workforce. One cause is criminal records: nearly one in eight American men possesses a criminal history, which disqualifies candidates in the eyes of most potential employers. Another cause is low educational attainment, which diminishes access to an increasingly technical and information-based economy. In light of this investigation, which indicates that multiple social forces have converged to exclude a major swath of the potential working population of the United States, the government should take action to improve poor outcomes for these “missing men.” Legislators should take action to lessen the stigma of criminal convictions, advocate programs proven to reduce recidivism, increase financial support for young people looking to pursue higher education, and provide specialized training for young men who express an interest in the technology-driven fields of the new economy.

Ready4

     In 1905 Max Weber published The Protestant Ethic and the Spirit of Capitalism, one of the foundational texts of modern sociology and of economic sociology, in particular, and also one of the first to take an interpretive approach to human participation in social and economic systems. His treatise documents the approaches that peoples of various religious beliefs have taken to labor, leisure, wages, and the moral significance of worldly professions since the advent of the industrial revolution and the development of global capitalism. Weber's work theorizes that the development of novel economic systems is contingent on the Protestant, as opposed to the Catholic, conception of work as inherently virtuous (wherein labor is seen as a theological and an economic imperative).

     Weber's work can be seen as an early criticism of Karl Marx's entirely materialistic conception of sociological progress, in that he suggests that religious conviction contributed to the development of economic systems and not the other way around. Moreover, Weber was admirably modest in his claims, acknowledging that his was not the definitive understanding of economic development. However, because of the ways social science has developed in the intervening century, Weber's work is more notable as a spur to further inquiry than as an enduring source of scientific truth. Additionally, Weber's thinking was hamstrung by the immaturity of sociological methodology in his time: statistical analysis, data collection, and mathematical economics were all in their theoretical infancy at the time of his writing. Given The Protestant Ethic's scientific obsolescence, it serves as an enduring example of how sociological analysis has changed since its inception and how it continues to do so.

Ready4

     The works of two nineteenth-century thinkers promote conflicting theories of the locus of responsibility for the course of historical events. Thomas Carlyle, in his 1841 treatise On Heroes, Hero-Worship, and the Heroic in History, places little emphasis on the events or conditions that produce major figures or the environments that allow them to rise to prominence. Instead, Carlyle posits that the extraordinary charisma, intelligence, wisdom, or political skill of individual “great” figures, invariably men, is the primary means by which social progress is effected. Herbert Spencer (1820–1903), though, writes not only that social environments are responsible for any great figures societies produce, but that Carlyle's approach is puerile and “unscientific” in the vein of many popular sociological works of the era. Although both thinkers promote a theory attempting to isolate the “mechanisms” of history vis-à-vis individual figures, only Spencer's has survived recent criticism largely intact. He emphasizes the lesser-understood contingencies of progress that comprise the immense majority of sociohistorical phenomena. He concludes that while major figures often take credit for the causal chain of significant events, the individuals themselves are less directly responsible for them than is commonly believed. This generality demonstrates how Spencer laid the foundation for twentieth-century historical scholarship, which holds to the belief that historical events, even those led by “heroes,” follow from multitudinous and sometimes untraceable social preconditions.
