The release of accumulated stress at a particular point during an earthquake alters the shear and normal stresses along the tectonic plate boundary and surrounding fault lines. According to geophysical theory, the Coulomb Failure Stress Change, which is the estimated alteration and resultant transfer of shear and normal stresses along a plate margin, is a function primarily of the change in shear stress along a fault and secondarily of the change in normal stress along the fault and the change in pore pressure in the fault zone, the latter two factors scaling with the friction coefficient characteristic of the plate margin. By measuring and calculating the stress transfer following seismic activity, it is possible to construct basic contour maps of regions where there have been positive stress changes and that are therefore at higher risk of being the epicenters of future large earthquakes. Calculations have revealed that when an earthquake occurs, approximately 80% of the energy is released as seismic waves, whereas the remaining 20% is stored and transferred to different locations along the fault, making those regions more susceptible to future earthquakes.
Predicting earthquakes by using the theory of stress transfer has important potential applications. The main rival technique for forecasting, the statistical analysis of patterns in seismic activity, suffers from a contradiction: foreshocks are deemed evidence of the potential for a future high-magnitude earthquake, but the lack of foreshocks along faults known to be active has been considered an equally plausible precursor of large events. The stress transfer theory was used to predict the location of a magnitude 7.4 earthquake that occurred two years later in the port city of Izmit, Turkey, killing more than 30,000 people. A limitation of the theory as currently applied, due to insufficient understanding of plate kinematics, is that refining predictions with temporal constraints appears to be far more problematic; the team that made the Izmit prediction had forecast an event near the city within thirty years.
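In the notation commonly used for this quantity (not spelled out in the passage, and given here only as an illustrative sketch), the relationship just described is typically written as

\[
\Delta \mathrm{CFS} \;=\; \Delta\tau \;+\; \mu\,\bigl(\Delta\sigma_n + \Delta p\bigr),
\]

where \(\Delta\tau\) is the change in shear stress on the receiving fault, \(\Delta\sigma_n\) the change in normal stress (taken as positive when the fault is unclamped), \(\Delta p\) the change in pore pressure, and \(\mu\) the friction coefficient characteristic of the plate margin; faults with positive \(\Delta \mathrm{CFS}\) are the ones mapped as higher-risk regions.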
For an online retailer, inventory represents a major source of cost. Every item of inventory is an item that has not been sold and that therefore represents unrealized gains. Moreover, the greater the inventory a retailer must hold in anticipation of filling customer orders, the greater the amount of money it must have invested in something that cannot be used for other purposes. Two tenets of inventory management are to turn over inventory as quickly as possible and to hold the minimum amount of inventory necessary to fulfill tomorrow's orders efficiently. Those two tenets are related: if lower levels of inventory are needed, then less inventory will be on hand and it will be turned over more quickly. In competing with another online retailer, however, a company will be inclined to hold inventory of as many items as possible, because if a particular type of item is not in stock at one retailer, a customer will turn to a competing retailer. Holding every possible item in stock adds to inventory cost. One earlier solution to this problem has been for a company to operate a single central, monster-sized warehouse. Centralizing inventory allows a company to hold the widest possible range of items at the lowest necessary levels. But since customers also value speed of delivery, the "monster warehouse" model has a flaw: it involves shipping from a location that may not be as close to a customer as it otherwise could be, leaving the retailer vulnerable to a competitor who can deliver faster.
These considerations highlight the importance of information about consumer demand. At the most basic level, knowing what items sell helps brick-and-mortar retailers determine what items to stock. In competitive online retail, companies with good data can stock minimum sufficient inventory levels. Even more significantly, a national retailer that can forecast demand for specific items by region can move from the monster warehouse model to a system of regional warehouses, decreasing shipping time without increasing inventory costs.
Yawning is a reflex consisting of the simultaneous intake of air and stretching of the eardrums, followed by an exhalation of breath. There are two leading theories of the purpose of yawning, in humans and other animals alike. Because yawning is common to most vertebrates, biologists assume that it plays an important role in survival. The two competing theories, dubbed “A” and “B,” seek to explain how.
Supporters of theory A argue that the primary purpose of yawning is to keep the brain cool. The human brain is quite sensitive to even small temperature increases: our reaction times increase and our recall is diminished when the temperature of the brain differs by even a few tenths of a degree from its optimal temperature of 98.6° F. Proponents of theory A point out that, when escaping from a predator, these tiny temperature changes in the brain could easily be the difference between life and death.
However, critics of theory A argue that yawning is not more common in warmer climates, and that the body has much more sophisticated methods of maintaining the optimal temperature in the brain—the circulatory system, for one example, and sweating, for another. They advocate theory B, which claims that yawning plays a primarily social role based on the fact that yawning is “contagious.”
Because yawning is so demonstrative and affects the body so little, say supporters of theory B, the reflex is most likely a social mode of communication that happens to have some slight physiological effects. The contagiousness of yawning has been shown to be stronger among group members who feel closer to each other, implying that it has a major social component. Based on this information, theory B claims that the primary purpose of yawning is to communicate an increased need for alertness throughout a group. This alertness, according to theory B, is only slightly encouraged by the yawning itself; the real benefit of the contagious yawn is that the yawning animal is reminded to stay alert to the other members of the group and to the surroundings.
The World Bank has offered ethical guidance to the governments of nations in making priority-setting decisions for pharmaceutical policy. A leading point of this counsel is to respond in only limited ways to patient demands for therapies that are not cost-effective. In every healthcare system, there is the possibility, and indeed the reality, of overspending in the course of treatment, which wastes a nation's limited resources. Patients who independently finance needless treatments that create no further medical costs manifest a less problematic form of overspending, but their treatment nevertheless potentially represents economic dead weight and the diversion of limited resources that could be applied toward necessary ends. Overspending public funds is even more problematic, since public sector spending is systematic and controllable through policy. Most serious is over-medication that harms the patient or others. A leading example of such an erroneous practice is the excessive administration of antibiotics, which, in fostering antimicrobial resistance, may pose as much risk as, or even greater risk than, the under-administration of vaccines. Decreasing wasteful medical expenditures is important in the effort to achieve the World Bank's suggested primary goal, which is to maintain a cost-effective pharmaceutical system that maximally, and equitably, improves population health.
Furthermore, the World Bank recommended, as a counterpart to these measures, efforts to improve the population's understanding of pharmaceutical uses and choices. This long-term goal is equally important, and equally difficult to achieve, in wealthier nations. Better public understanding helps decrease the tension between less-informed wants and well-determined needs. Culturally ingrained maxims, such as a preference for injections, do not change overnight. Moreover, relying on brand identification can be a rational strategy for information-limited consumers worldwide. Nevertheless, moving citizens to a more informed and empowered position is an ethical obligation, as well as a strategy to reduce costs and minimize risks.
The diversity of species in bacterial communities is often studied by phenotypic characterization. A problem with this method is that phenotypic techniques can be used only on bacteria that can be isolated and cultured, and most soil bacteria that have been observed by fluorescence microscopy cannot be isolated and cultured. DNA can be isolated from bacteria in soil to obtain genetic information about the nonculturable bacteria therein. The heterogeneity of this DNA is a measure of the total number of genetically different bacteria, or the number of species. DNA heterogeneity can be determined by thermal denaturation and reassociation. In general, renaturation of homologous single-stranded DNA follows second-order reaction kinetics, under which the time required for a given fraction of the DNA to renature is proportional to the genome size, or the complexity of the DNA, defined as the number of nucleotides in the DNA of a haploid cell, excluding repetitive DNA. The genetic diversity of a bacterial community can be inferred in a similar manner.
Vigdis Torsvik, Jostein Goksyr, and Frida Lise Daae used this process to analyze soil samples taken from a beech forest north of Bergen, Norway. The reassociation curves for the main DNA fraction did not follow ideal second-order reaction kinetics, so the half-life values gave only approximate, underestimated values for the number of genomes present. Nevertheless, the soil bacterial DNA was very heterogeneous; the diversity corresponded to about 4,000 distinct genomes of a size typical of standard soil bacteria, about 200 times as many species as could have been isolated and cultured. Various procedures for isolating DNA from river sediments and seawater are known, which opens up the possibility of applying the thermal denaturation method to systems other than soil. The results of the Norway study indicated that the genetic diversity of the total bacterial community in a deciduous-forest soil is so high that heterogeneity can be determined only approximately. In environments with pollution or extreme conditions, the genetic diversity might be easier to determine precisely.
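As a rough sketch of the kinetics being invoked (standard reassociation relations, not stated explicitly in the passage): for an ideal second-order reaction, the fraction of DNA remaining single-stranded after time \(t\) is

\[
\frac{C}{C_0} \;=\; \frac{1}{1 + k\,C_0 t},
\qquad\text{so that}\qquad
C_0\,t_{1/2} \;=\; \frac{1}{k},
\]

where \(C_0\) is the initial concentration of single-stranded DNA and \(k\) is the reassociation rate constant. Because \(k\) falls as the number of distinct sequences rises, the half-association value \(C_0\,t_{1/2}\) grows in proportion to genomic complexity, which is why a reassociation curve can be read as a measure of how many different genomes a soil sample contains.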
The dots on the graph above indicate the semester test averages and final exam grades for 40 students in Ms. Joshi's geometry class. How many students had a test average above a 90 and also scored above an 80 on the final exam?
Birds of various species, from pigeons to swallows to larger birds, can navigate long distances on Earth, across continents and hemispheres. That they can traverse these distances thanks to a magnetic sense has been demonstrated through tests in which birds fitted with magnets have lost their navigational capability. Precisely what biological mechanism enables birds to orient in this way is still something of a mystery, however, with two theories prevailing.
One theory is that birds possess magnetic sensors in the form of grains of magnetite, which is an easily magnetized form of iron oxide. Such magnetite grains are common not only in animals but even in bacteria, where they have been established as a component enabling magnetic orientation. In the case of birds, magnetite grains are numerous in beaks, as dissections of pigeons have confirmed. Moreover, in another experiment, the trigeminal nerve, which connects the beak to the brain, was severed in reed warblers; the affected birds lost their sense of magnetic dip, which is critical to navigation.
Critics of the theory have pointed out that the grains in the beak, though abundant, are not concentrated, as would be expected in a sensory organ, but are instead found in wandering macrophages. And while an alternative explanation for birds' sensory abilities might posit magnetite grains outside of the beak, such an explanation would be supported neither by the beak dissections nor by the tests involving severed trigeminal nerves. Critics of magnetite-centric theories suggest a second theory: that the magnetic field of the Earth influences a chemical reaction in birds, specifically in a bird's retina. Experiments have demonstrated that destroying the portion of a robin's retina known as cluster N eliminates the bird's ability to detect north. Birds' eyes do not contain magnetite grains, however. Rather, some advocates of the theory that birds navigate by retinal interaction believe that a retinal protein known as cryptochrome processes magnetic information within cluster N. Surprisingly, the mechanism by which cryptochrome could detect magnetic orientation depends on quantum mechanics: when hit by light, the cryptochrome would create a pair of particles, one of which subsequently presents information to the eye, in the form of a spot, when it is triggered by the corresponding particle after that particle has traveled some distance.
The outsourcing of production to factories located overseas from companies' home countries has been a hallmark practice of multinational brands since the 1990s and is lauded by some economists as advancing the well-being of people in both the home country and the production country. However, not all of the benefits attributed to this globalization practice necessarily accrue, and there are concerns about outsourcing that are not readily addressed within the formulations of economic theory. First, a home company that separates its brand and its product as completely as possible and treats the brand as paramount hardly sends a message that product quality is central to its operations; more likely, all of its innovation efforts will focus on branding, and such a company will settle for a product that is merely (and maybe barely) good enough. Dismissing this point, economists may cite the law of comparative advantage: outsourcing allows both companies involved to pursue greater profit and well-being according to their capabilities. Specifically, workers in the countries of manufacture should be paid more than they would be paid otherwise, even if they are paid less than factory workers in the original country; meanwhile, workers in the home country should be pushed to increase their skills and education and move to higher-skill jobs that are less available in the country of manufacture. Whether displaced workers in the home country acquire skills and make this shift in any reasonable timeframe is hardly demonstrated, however, and while outsourcing may create value by lowering costs, it has been asserted that workers in the countries of production are making no more after outsourcing than previously and hence in effect are enjoying none of the new profit. The CEO of one outsourcing company, when pressed on this point by a reporter, explained that, as the employees of those factories were not employees of his company, he could not be responsible for them. He asked the reporter whether journalists should be expected to know, and be responsible for, the manufacturing conditions of the paper on which their articles are printed. This comment, as much as it defends corporations, highlights the broadest worry about outsourcing: in global supply chains with increasingly distant and opaque connections, responsibility is too easy to shirk and perhaps even impossible to determine.
M. Norton Wise's examination of the calorimeter, a machine invented in the 1780s to measure heat, elucidates his theory of a role that technology plays in society beyond the applications for which it has been developed.
In the schema given to us by Thomas Kuhn, as popularly understood, cultural differences are mediated through the paradigms that underlie theories—the theories' interconnected assumptions. According to Wise's theory, however, technologies act as cultural mediators, reconciling differences among different fields of thought and study, such as chemistry, political economy, and mathematics, and also connecting ideas with realities. When Antoine-Laurent Lavoisier and Pierre-Simon de Laplace first invented the calorimeter, they thought of it in comparison to a simple physical device, the balance scale: the calorimeter balanced quantities of heat against quantities of melted ice. In fact, Lavoisier and Laplace conceived of the device somewhat differently, and in this respect the calorimeter performed mediation of the first kind. Lavoisier, who is remembered as a chemist, viewed the calorimeter as measuring a balance between chemical substances, whereas Laplace, who is remembered as a mathematician and physical astronomer, viewed the calorimeter as balancing forces.
The differing interests of Lavoisier and Laplace (who tried at least once to rid himself of the partnership in order to work on pure mathematics) caused tension. This tension between the otherwise distinct fields of chemistry and physical astronomy was resolved, in part, by the calorimeter itself; it provided a common ground for the two fields in its own concrete existence and quantitative measure, if not entirely in concept. Second, the calorimeter, in providing commonly accepted measurements, gave commonly accepted meanings to the ideas involved in interpreting those measurements: caloric fluid and the physical force of heat.
We are typically more inclined to view a new technological invention in Kuhn's terms: it supports an existing paradigm or, rarely, massively disrupts it and causes a paradigm shift. Wise would agree with Kuhn that our conception of the electron is reinforced by the television and the fiberoptic cable, but while Kuhn sees the theoretical relationship as one of champion against challenger, arbitrated through defeat and continued reign, technologies, per Wise, arbitrate by harmonizing.
Physical theory implies that the existence of astronomical entities above a certain mass is evidence for the existence of black holes. The Earth does not collapse upon itself under gravitational force because gravity is countered by the outward pressure generated by the electromagnetic repulsion between the atoms making up the planet. But if such opposing forces are overpowered, gravity will always lead to the formation of a black hole. Assuming the validity of general relativity, we can calculate the upper bound for the mass of a star, the Tolman-Oppenheimer-Volkoff limit, to be 3.6 solar masses; any object heavier than this will be unable to resist collapse under its own mass and must be a black hole.
The search for entities more massive than the Tolman-Oppenheimer-Volkoff limit brings us to the examination of X-ray binary systems. In an X-ray binary, two bodies rotate around their center of mass, a point between them, while one component, usually a normal star, sheds matter to the other, more massive component, known as the accretor. The energy of the shed matter is released as observable X-ray radiation. Since binary stars rotate around a common center of gravity, the mass of the accretor can be calculated from the orbit of the visible component.
By 2004, about forty X-ray binaries containing candidates for black holes had been discovered. The accretors in these binary systems were not visible, as is to be expected of black holes, but that fact alone does not distinguish them from very dense and hence less luminous stars, such as neutron stars. More to the point is that these accretors were of mass far in excess of 3.6 solar masses. Famously, Cygnus X-1, an X-ray binary in the constellation Cygnus, has an accretor whose mass has been calculated to be 14 solar masses, plus or minus 4 solar masses. While this finding does not, without further interpretation, rule out other phenomena, it provides strong evidence that black holes exist. The conclusion that black holes exist depends on the reliability of the general-relativistic calculations involved; if more generous assumptions are made, the Tolman-Oppenheimer-Volkoff limit can be calculated to be as high as 10 solar masses. The finding also establishes plausibility, if not direct evidence, for the existence of the supermassive black holes hypothesized to exist at the centers of some galaxies.
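As a sketch of how such a mass estimate is obtained (a standard binary-star relation, not spelled out in the passage): if the visible component has orbital period \(P\) and line-of-sight velocity semi-amplitude \(K\), then the mass function

\[
f(M) \;=\; \frac{P K^{3}}{2\pi G} \;=\; \frac{\bigl(M_{\mathrm{acc}} \sin i\bigr)^{3}}{\bigl(M_{\mathrm{acc}} + M_{\mathrm{vis}}\bigr)^{2}}
\]

places a strict lower bound on the accretor mass \(M_{\mathrm{acc}}\), since the inclination factor satisfies \(\sin i \le 1\) and \(M_{\mathrm{vis}} \ge 0\). If even this lower bound exceeds the Tolman-Oppenheimer-Volkoff limit, the accretor cannot be a neutron star.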
Last year, a company sold six types of products, each a telephony product or a home appliance. The table above shows the annual revenue from each product as a fraction of the company's total annual revenue. If the company's telephony products generated revenue in excess of of the total annual revenue, is Product 3 a telephony product?
- Product 1 and Product 6 are telephony products.
- Product 2 and Product 5 are not telephony products.
According to P. F. Drucker, the management philosophy known as Total Quality Management (TQM), which is designed to be adopted consistently throughout an organization and to improve customer service by using sampling theory to reduce the variability of a product's quality, can work successfully in conjunction with two older management systems. As Drucker notes, TQM's scientific approach is consistent with the statistical sampling techniques of the "rationalist" school of scientific management, and the organizational structure associated with TQM is consistent with the social and psychological emphases of the "human relations" school of management.
However, TQM cannot simply be grafted onto these systems or onto certain other non-TQM management systems. Although, as Drucker contends, TQM shares with such systems the ultimate objective of increasing profitability, TQM requires fundamentally different strategies. While the other management systems referred to use upper-management decision-making and employee specialization to maximize shareholder profits over the short term, TQM envisions the interests of employees, shareholders, and customers as convergent. For example, lower prices not only benefit consumers but also enhance an organization's competitive edge and ensure its continuance, thus benefiting employees and owners. TQM's emphasis on shared interests is reflected in the decentralized decision-making, integrated production activity, and lateral structure of organizations that achieve the benefits of TQM.
A prototypical nanoparticle is produced by chemical synthesis, then coated with polymers, drugs, fluorophores, peptides, proteins, or oligonucleotides, and eventually administered into cell cultures. Nanoparticles were conceived of as benign carriers, but multiple studies have demonstrated that their design influences cell uptake, gene expression, and toxicity.
More specifically, interactions between nanoparticle-bound ligands (the molecules that bind to nanoparticles) and cellular receptors depend on the engineered geometry and the ligand density of a nanomaterial. The nanoparticle acts as a scaffold whose design dictates the number of ligands that interact with the receptor target. A multivalent effect occurs when multiple ligands on the nanoparticle interact with multiple receptors on the cell. The binding strength of the complexed ligands is greater than the sum of their individual affinities; this accumulated effect of multiple affinities is known as the avidity of the entire complex.
This phenomenon is illustrated by the binding affinity of the antibody trastuzumab for the ErbB2 receptor, a protein whose overexpression has been shown to play an important role in the development and progression of certain aggressive types of breast cancer. Trastuzumab's binding affinity for ErbB2, when the antibody is liganded to a nanoparticle, increases proportionally with the size of the nanoparticle, owing to the higher density of the ligand on the nanoparticle surface. However, when viewed in terms of downstream signaling via the ErbB2 receptor, mid-sized gold nanoparticles induced the strongest effect, suggesting that factors beyond binding affinity must be considered.
Furthermore, several studies have shown that nanoparticle design can generate effects not obtained simply by a free ligand in solution. For example, the aforementioned mid-sized trastuzumab-coated gold nanoparticles altered cellular apoptosis, the process of programmed cell death, by influencing the activation of the so-called "executioner" caspase enzymes. Similarly, receptor-specific peptides improve in their ability to induce angiogenesis, the physiological process through which new blood vessels form from preexisting vessels, when they are conjugated to a nanoparticle surface. Specifically, presentation of the peptide on a structured scaffold increased angiogenesis, which is dependent on receptor-mediated signaling. These findings highlight the advantages of a ligand bound to a nanoparticle over one free in solution. The nanoparticle surface creates a region of highly concentrated ligand, which increases avidity and, potentially, alters cell signaling.