Friday, December 27, 2019

The Interference of the Stroop Effect on Colours and Words

ABSTRACT

The major focus of this experiment was to investigate how matching or mismatching the font colour of a given stimulus word with the colour named by the word affects the time taken to identify the font colour of that word. This is the Stroop effect, one of the fundamental phenomena commonly studied in cognitive psychology. In other words, this experiment investigated the difference in reaction time between incongruent and congruent conditions. The participants were required to identify the colour of the words presented while paying no attention to the actual word. The time taken by each participant to respond at every step was recorded in seconds. Thereafter, a hypothesis test using the t-test method was carried out on the collected data to show that the reaction time in the congruent condition was in fact faster.

INTRODUCTION

The concept of the Stroop effect was introduced in 1935 by John Ridley Stroop, drawing on the theory of automatic processing. This theory is concerned with how processing activities become automatic as a result of long-term practice or involvement; at some point, responses to such activities become faster, demand less attention, and are not easily suppressed (Crank, 1973). According to Stroop, activities such as identifying a word and its associated colour also rely on automatic processing. The Stroop effect is therefore a test that demonstrates the difference in reaction time between naming colours, reading colour names, and naming the colours of words printed in different colours (Weiner, 2003). The key objective of the Stroop effect experiment was to assess cognitive ability and attentional focus based on memory and learning. An individual's ability to concentrate on a given stimulus in the surroundings while paying no attention to others is a fundamental element of attention. The basis of research on the Stroop effect is that if interference can divert an individual's attention from a given stimulus, then interference is effective and can affect the neural or cognitive components of selective attention. Stroop used two theories to explain the basis of the Stroop effect: the Speed of Processing Theory and the Selective Attention Theory. According to the Selective Attention Theory, interference normally takes place because naming colours calls for a greater level of attention than merely reading words. The Speed of Processing Theory, on the other hand, points out that interference can easily occur simply because reading words is faster than naming colours. Would these different dimensions of a stimulus have an impact on the reaction time or the response speed? These were among the questions on which Stroop based his research and the Stroop effect experiments. The theoretical background for the present experiment also draws on Schneider and Shiffrin's (1977) controlled and automatic processing theory.
According to the two, automatic processing is faster than controlled processing. Therefore, if a given activity is automated, it tends to take place with little or actually no conscious effort. On the other hand, according to Sheibe, Shaver and Carrier (1967), it is easier to identify a congruent word than an incongruent word. This concurred with the findings of the investigation done by Stroop (1935). Much had already been researched on the key relationship between these contradictory processes, but it was Stroop who brought in the element of combining colours and words, hence the Stroop effect. He compared people's ability to read colour names against naming the colours of printed words. Stroop (1935) reliably concluded that there is an interference effect that impacted the participants, especially in the time they took to complete the task (Weiner & Craighead, 2010).

As far as this experiment is concerned, the analysis was done using a t-test to show that, consistent with Stroop (1935), there is an interference effect on responses in incongruent situations compared to congruent situations. This form of data analysis required the experimenter to compute the mean and standard deviation of the response times. A 0.10 significance level was considered. From these values, the t-test was then carried out in order to draw a reliable conclusion on the formulated hypotheses. Two hypotheses were considered in this analysis: the null hypothesis and the alternative hypothesis. The null hypothesis states that the reaction times for the congruent and incongruent conditions are the same (µ1 − µ2 = 0). The alternative hypothesis states that the reaction times for the two conditions differ, the expectation being that the incongruent condition is slower than the congruent condition (µ1 − µ2 ≠ 0). The test is two-tailed; therefore, if the test statistic is much larger or much smaller than expected under the null hypothesis, the null hypothesis is rejected (Dodd, 2001).

DESIGN

The design used in this experiment was a repeated-measures design with two conditions: stimuli A and stimuli B. Stimuli A contained 30 congruent words, while stimuli B contained 30 incongruent words. Stimuli A can be considered the uncontrolled condition, in which the words named the colour they were printed in, while stimuli B was the controlled condition, in which the words named colours different from those they were printed in. Each stimulus list had 30 items, and both were presented to 51 participants. The experimenter administered both stimuli A and B to each participant, and the response time for each stimulus, both congruent and incongruent, was recorded in seconds.

PARTICIPANTS

Fifty-one undergraduate college students willingly volunteered to take part in this laboratory practical. All participants were placed in the same environment and taken through the exercise by the experimenter.
However, the participants considered for this exercise were strictly over 18 years of age, of mixed gender, and recruited without discrimination on grounds such as nationality. Statistically, the average age of the fifty-one participants was 36.56 years, with a standard deviation of 9.31 years. The youngest participant was 19 years of age, while the eldest was 64 years. The time taken by each participant to respond to a given stimulus was carefully recorded, and participants were encouraged to proceed in case they failed to respond at a given point.

MATERIALS

The apparatus used for this experiment included a personal computer to run the stimuli and a projector to display them to the participants. The response time of each participant was recorded using a stopwatch. The collected data were analysed using the SPSS software.

PROCEDURE

The participants were taken through the instructions before starting the laboratory process and were tested individually. Each participant was presented with the two lists, stimuli A and B, containing 30 stimuli each. The participants were requested to respond to each stimulus as quickly as possible by naming the colours of the words presented on the projector. The response times taken by each participant to react to both the congruent (stimuli A) and incongruent (stimuli B) lists were recorded.

RESULTS

The response time in seconds for each participant in both the congruent and incongruent conditions was collected, and erroneous responses were removed to give reliable data for analysis. The mean and standard deviation for each set of data were analysed and presented using the SPSS statistical tool. From the analysis, the mean value obtained for the congruent condition (stimuli A) was 21.6157 and for the incongruent condition (stimuli B) was 35.3004. The standard deviation for the congruent condition was 7.6833, and for the incongruent condition it was 9.04817. The table below shows the data analysis for the standard deviation and the mean values for both the congruent and incongruent experimental conditions. The graphical representation of the above analysis, for both the mean and standard deviation values, is as shown below.

ANALYSIS

Computing the standard error (SE), the degrees of freedom (DF), and the t statistic (t):

SE = sqrt(S1^2/N1 + S2^2/N2), where S1 = 7.6833, S2 = 9.04817, N1 = 51 and N2 = 51
SE = sqrt(7.6833^2/51 + 9.04817^2/51) = sqrt(1.1575 + 1.6053) = sqrt(2.7628) = 1.66216

DF = (S1^2/N1 + S2^2/N2)^2 / { [ (S1^2/N1)^2 / (N1 − 1) ] + [ (S2^2/N2)^2 / (N2 − 1) ] }
DF = (1.1575 + 1.6053)^2 / { [ (1.1575)^2 / 50 ] + [ (1.6053)^2 / 50 ] } = 7.6330 / (0.02680 + 0.05154) ≈ 97.4

t = [(µ1 − µ2) − d] / SE = [(21.6157 − 35.3004) − 0] / 1.66216 = −13.6847 / 1.66216 = −8.2330

For a two-tailed test, the P-value is the probability that a t statistic with 97 degrees of freedom is more extreme than the observed value, that is, less than −8.2330 or greater than 8.2330 (Proctor, 1994). From the t-distribution calculator, P(t ≤ −8.2330) ≈ 0.000 and P(t ≥ 8.2330) ≈ 0.000. Thus, the P-value ≈ 0.000. Therefore, since the P-value (0.000) is much less than the set significance level of 0.10, the null hypothesis is rejected.
The alternative hypothesis is supported: the reaction time for the incongruent condition was higher than the reaction time for the congruent condition (Cramer, 1967).

CONCLUSION

The alternative hypothesis was that the reaction time for the incongruent condition would be higher than the reaction time for the congruent condition. The results from this experiment support this hypothesis, since the time taken to respond to an incongruent stimulus was much longer than for a congruent one. Therefore, according to this experiment, interference can divert an individual's attention from a given stimulus by affecting the neural or cognitive components of selective attention (Korbmacher, 2016). This is in line with Stroop (1935) and Sheibe, Shaver and Carrier (1967): it is easier to identify a congruent word than an incongruent one.
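As a cross-check, the Welch t-test reported in the analysis can be reproduced from the summary statistics alone. The following is a minimal sketch in Python using SciPy (an assumption; the original analysis was run in SPSS), taking only the means, standard deviations, and sample sizes reported in the results rather than the raw response times.

```python
# Reproduce the two-sample Welch t-test from the reported summary statistics.
# Assumes the means/SDs reported in RESULTS; raw response times are not available.
from scipy import stats

mean_congruent, sd_congruent, n_congruent = 21.6157, 7.6833, 51
mean_incongruent, sd_incongruent, n_incongruent = 35.3004, 9.04817, 51

# equal_var=False selects Welch's t-test, matching the SE and DF formulas above.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean_congruent, sd_congruent, n_congruent,
    mean_incongruent, sd_incongruent, n_incongruent,
    equal_var=False,
)

print(f"t = {t_stat:.4f}")   # expected: about -8.233
print(f"p = {p_value:.2e}")  # far below the 0.10 significance level
```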

Thursday, December 19, 2019

The Spill Of The BP Oil Spill - 1602 Words

The BP Oil Spill began on April 20, 2010 in the Gulf of Mexico after the BP-leased, Transocean-owned Deepwater Horizon drilling rig exploded, killing 11 and injuring 17 of the 126 crew members. The explosion also sank the Deepwater Horizon drilling rig, triggering a massive oil spill that would last for 87 days and leak 4.9 million barrels of oil into the Gulf of Mexico. After the explosion, BP and the federal government enlisted the best minds in the country and worked tirelessly to come up with a solution to stop the leaks, but failed several times due to the extreme depth, pressure and technical complexity of using remotely operated vehicles (ROVs). The final resolution of the leak was achieved on July 15, 2010, when a sealing cap was installed over the top of the well. Although the stoppage of the leak took just under three months, the clean-up efforts continued for much longer as containment booms, chemical dispersants, skimming vessels and controlled burns were used to mitigate the environmental damage. The event was classified as the largest accidental marine oil spill in the world and the largest environmental disaster in United States history. The BP Oil Spill was a very complex situation with a wide range of stakeholders. After the explosion and discovery of leaking oil, the National Contingency Plan took effect, establishing a "Unified Area Command" in which local, state and federal authorities would join forces with the private sector to carry out the response.

Wednesday, December 11, 2019

Public Administration Budgeting and Human Resource

Question: Discuss about the Public Administration for Budgeting and Human Resource.

Answer:

Introduction

Public administration is a field of study that primarily focuses on organizational management and executive action. It also entails the practical implementation of policies and statutes in an executive and mandated order (Lane, 2005). For this reason, public administrators are expected to offer the requisite government programs and services that exhibit value and service to the public. Public administration is characterized by six key elements: ethics, statistics, policy analysis, organizational theory, budgeting, and human resources. As such, it is evident that public administration encompasses a broad array of aspects, making it a multidisciplinary element of management.

Hierarchy of Southwest Airline

Overview of the Company

Southwest Airline is a commercial air transport organization that primarily deals in cargo and passenger transit within the United States, with possible expansion forecast to other nations. Southwest prides itself on being the largest low-cost carrier in the world, with an important market share in the flight sector. As a publicly traded company, Southwest has a competent workforce. The airline operates approximately 3,900 flights on a daily basis, with peak travel days registering an even higher number. The franchise serves 98 destinations within the United States. It is worth noting that Southwest has a wide operating base, with an approximate fleet size of 713 aircraft.

Hierarchy of Southwest Airline

Hierarchy is defined as the structuring, arrangement and distribution of powers, responsibilities, and authority with the primary objective of enhancing an efficient and coordinated flow of work in the organization (Robbins & Coulter, 2015). The organization assumes a vertical structure in conducting its operations: authority streams from top management to middle management and, lastly, to lower management. The organization's top-level management is responsible for organizational well-being, such as ensuring that employees are satisfied with their work and are realizing the company's strategic objectives. Although top management is responsible for making all decisions in the organization, it often seeks input from junior staff. Top management, including the board of directors, understands the importance of engaging employees in the decisions that affect them. For this reason, it often seeks their approval before implementing any decision, to ensure that employees feel valued by the company. The figure below highlights the organization's hierarchy.

Figure 1: Southwest Airline hierarchy.

Management System of Southwest Airline

A management system is described as the institutionalized framework that encompasses all the policies, mandates, duties and tasks needed to ensure that the strategic objectives of the organization are realized (Griffin, 2007). Notably, Southwest has, since inception, organized its management system to maintain its competitiveness in the airline industry. As such, it uses ISO-certified systems that are characterized by various aspects of quality management, social accountability and information security management systems (ISMS). Below is a summarized chart of the management system cycle of the company.
Figure 2: Southwest Airline Management System

Role of Management

It is important for organizations to appreciate and acknowledge the essential role played by management in an effective work process. Ideally, management forms the backbone of all organizational processes, and hence a great deal of importance is accorded to it (Pfiffner & Presthus, 2009). Some of the fundamental roles that organizational management performs include planning the organizational flow of processes, organizing the activities of the organization, staffing, and decision-making. These functions are crucial in sustaining the competitiveness of the company in the airline industry.

Flow of Communication at Southwest Airline

The communication cycle and flow are vital elements within an organization because they depict the level of effectiveness of management. The communication cycle is categorized into five types: downward communication, upward communication, lateral communication, diagonal communication, and external communication (Boateng & Nikoi, 2014). These communication strategies are used by various organizations to communicate with employees. Southwest Airline makes use of an upward communication style. According to Wilson (2010), upward communication is that which moves from the lower levels to the highest level. As such, subordinates are actively involved in decision-making, which results in a loyal workforce. Southwest, in its strategic plan, cites the use of the upward cycle to ensure a loyal workforce as well as equitable relations.

In conclusion, management plays an essential role in enhancing the competitiveness of an organization. It is management's duty to plan and coordinate all the functions of the organization to ensure that things run smoothly. Besides this, communication is essential in engaging employees in the issues affecting the organization. The use of an upward communication strategy by Southwest Airline makes employees feel valued and respected.

References

Boateng, K., & Nikoi, E. (2014). Collaborative Communication Processes and Decision Making in Organizations. Hershey, PA: IGI Global.

Griffin, R. W. (2007). Fundamentals of Management: Core Concepts and Applications. Boston, MA: Houghton Mifflin.

Lane, J. (2005). Public Administration and Public Management: The Principal-Agent Perspective. London: Routledge.

Pfiffner, J. M., & Presthus, R. V. (2009). Public Administration. New York: Ronald Press.

Robbins, S. P., & Coulter, M. K. (2015). Management. Upper Saddle River, NJ: Pearson Prentice Hall.

Wilson, G. L., Goodall, H. L., & Waagen, C. L. (2010). Organizational Communication. New York: Harper & Row.

Wednesday, December 4, 2019

Physics General Relativity and 19th Century Essay Example

Physics: General Relativity and 19th Century Essay

Physics is a natural science that involves the study of matter and its motion through spacetime, as well as all related concepts, including energy and force. More broadly, it is the general analysis of nature, conducted in order to understand how the universe behaves. Physics is one of the oldest academic disciplines, perhaps the oldest through its inclusion of astronomy. Over the last two millennia, physics had been considered synonymous with philosophy, chemistry, and certain branches of mathematics and biology, but during the Scientific Revolution in the 16th century, it emerged to become a unique modern science in its own right. However, in some subject areas such as mathematical physics and quantum chemistry, the boundaries of physics remain difficult to distinguish. Physics is both significant and influential, in part because advances in its understanding have often translated into new technologies, but also because new ideas in physics often resonate with other sciences, mathematics, and philosophy. For example, advances in the understanding of electromagnetism or nuclear physics led directly to the development of new products which have dramatically transformed modern-day society, such as television, computers, domestic appliances, and nuclear weapons; advances in thermodynamics led to the development of motorized transport; and advances in mechanics inspired the development of calculus.

SCOPE AND AIMS OF PHYSICS

Physics covers a wide range of phenomena, from elementary particles (such as quarks, neutrinos and electrons) to the largest superclusters of galaxies. Included in these phenomena are the most basic objects from which all other things are composed, and therefore physics is sometimes called the fundamental science. Physics aims to describe the various phenomena that occur in nature in terms of simpler phenomena. Thus, physics aims both to connect the things observable to humans to root causes, and then to try to connect these causes together. For example, the ancient Chinese observed that certain rocks (lodestone) were attracted to one another by some invisible force. This effect was later called magnetism, and was first rigorously studied in the 17th century. A little earlier than the Chinese, the ancient Greeks knew of other objects, such as amber, that when rubbed with fur would cause a similar invisible attraction between the two. This was also first studied rigorously in the 17th century, and came to be called electricity. Thus, physics had come to understand two observations of nature in terms of some root cause (electricity and magnetism). However, further work in the 19th century revealed that these two forces were just two different aspects of one force – electromagnetism. This process of unifying forces continues today, and electromagnetism and the weak nuclear force are now considered to be two aspects of the electroweak interaction. Physics is closely related to the other natural sciences and, in a sense, encompasses them.
Chemistry, for example, deals with the interaction of atoms to form molecules; much of modern geology is largely a study of the physics of the earth and is known as geophysics; and astronomy deals with the physics of the stars and outer space. Even living systems are made up of fundamental particles and, as studied in biophysics and biochemistry, they follow the same types of laws as the simpler particles traditionally studied by a physicist. The emphasis on the interaction between particles in modern physics, known as the microscopic approach, must often be supplemented by a macroscopic approach that deals with larger elements or systems of particles. This macroscopic approach is indispensable to the application of physics to much of modern technology. Thermodynamics, for example, a branch of physics developed during the 19th century, deals with the elucidation and measurement of properties of a system as a whole and remains useful in other fields of physics; it also forms the basis of much of chemical and mechanical engineering. Such properties as the temperature, pressure, and volume of a gas have no meaning for an individual atom or molecule; these thermodynamic concepts can only be applied directly to a very large system of such particles. A bridge exists, however, between the microscopic and macroscopic approach; another branch of physics, known as statistical mechanics, indicates how pressure and temperature can be related to the motion of atoms and molecules on a statistical basis.

Physics emerged as a separate science only in the early 19th century; until that time a physicist was often also a mathematician, philosopher, chemist, biologist, engineer, or even primarily a political leader or artist. Today the field has grown to such an extent that, with few exceptions, modern physicists have to limit their attention to one or two branches of the science. Once the fundamental aspects of a new field are discovered and understood, they become the domain of engineers and other applied scientists. The 19th-century discoveries in electricity and magnetism, for example, are now the province of electrical and communication engineers; the properties of matter discovered at the beginning of the 20th century have been applied in electronics; and the discoveries of nuclear physics, most of them not yet 40 years old, have passed into the hands of nuclear engineers for applications to peaceful or military uses.

HISTORY OF PHYSICS

Although ideas about the physical world date from antiquity, physics did not emerge as a well-defined field of study until early in the 19th century. The Babylonians, Egyptians, and early Mesoamericans observed the motions of the planets and succeeded in predicting eclipses, but they failed to find an underlying system governing planetary motion. Little was added by the Greek civilization, partly because the uncritical acceptance of the ideas of the major philosophers Plato and Aristotle discouraged experimentation.

Some progress was made, however, notably in Alexandria, the scientific center of Greek civilization. There, the Greek mathematician and inventor Archimedes designed various practical mechanical devices, such as levers and screws, and measured the density of solid bodies by submerging them in a liquid.
Other important Greek scientists were the astronomer Aristarchus of Samos, who measured the ratio of the distances from the earth to the sun and the moon; the mathematician, astronomer, and geographer Eratosthenes, who determined the circumference of the earth and drew up a catalog of stars; the astronomer Hipparchus, who discovered the precession of the equinoxes; and the astronomer, mathematician, and geographer Ptolemy, who proposed the system of planetary motion that was named after him, in which the earth was the center and the sun, moon, and stars moved around it in circular orbits.

Little advance was made in physics, or in any other science, during the Middle Ages, other than the preservation of the classical Greek treatises, for which the Arab scholars such as Averroes and Al-Quarashi, the latter also known as Ibn al-Nafis, deserve much credit. The founding of the great medieval universities by monastic orders in Europe, starting in the 13th century, generally failed to advance physics or any experimental investigations. The Italian Scholastic philosopher and theologian Saint Thomas Aquinas, for instance, attempted to demonstrate that the works of Plato and Aristotle were consistent with the Scriptures. The English Scholastic philosopher and scientist Roger Bacon was one of the few philosophers who advocated the experimental method as the true foundation of scientific knowledge and who also did some work in astronomy, chemistry, optics, and machine design.

The advent of modern science followed the Renaissance and was ushered in by the highly successful attempt by four outstanding individuals to interpret the behavior of the heavenly bodies during the 16th and early 17th centuries. The Polish natural philosopher Nicolaus Copernicus propounded the heliocentric system that the planets move around the sun. He was convinced, however, that the planetary orbits were circular, and therefore his system required almost as many complicated elaborations as the Ptolemaic system it was intended to replace (see Copernican System). The Danish astronomer Tycho Brahe, believing in the Ptolemaic system, tried to confirm it by a series of remarkably accurate measurements. These provided his assistant, the German astronomer Johannes Kepler, with the data to overthrow the Ptolemaic system and led to the enunciation of three laws that conformed with a modified heliocentric theory. Galileo, having heard of the invention of the telescope, constructed one of his own and, starting in 1609, was able to confirm the heliocentric system by observing the phases of the planet Venus. He also discovered the surface irregularities of the moon, the four brightest satellites of Jupiter, sunspots, and many stars in the Milky Way. Galileo's interests were not limited to astronomy; by using inclined planes and an improved water clock, he had earlier demonstrated that bodies of different weight fall at the same rate (thus overturning Aristotle's dictums), and that their speed increases uniformly with the time of fall. Galileo's astronomical discoveries and his work in mechanics foreshadowed the work of the 17th-century English mathematician and physicist Sir Isaac Newton, one of the greatest scientists who ever lived.

NEWTON AND MECHANICS

Starting about 1665, at the age of 23, Newton enunciated the principles of mechanics, formulated the law of universal gravitation, separated white light into colors, proposed a theory for the propagation of light, and invented differential and integral calculus.
Newton's contributions covered an enormous range of natural phenomena: He was thus able to show that not only Kepler's laws of planetary motion but also Galileo's discoveries of falling bodies follow from a combination of his own second law of motion and the law of gravitation, and to predict the appearance of comets, explain the effect of the moon in producing the tides, and explain the precession of the equinoxes.

The subsequent development of physics owes much to Newton's laws of motion, notably the second, which states that the force needed to accelerate an object will be proportional to its mass times the acceleration. If the force and the initial position and velocity of a body are given, subsequent positions and velocities can be computed, although the force may vary with time or position; in the latter case, Newton's calculus must be applied. This simple law contained another important aspect: Each body has an inherent property, its inertial mass, which influences its motion. The greater this mass, the slower the change of velocity when a given force is impressed. Even today, the law retains its practical utility, as long as the body is not very small, not very massive, and not moving extremely rapidly. Newton's third law, expressed simply as "for every action there is an equal and opposite reaction," recognizes, in more sophisticated modern terms, that all forces between particles come in oppositely directed pairs, although not necessarily along the line joining the particles.

Gravity

Newton's more specific contribution to the description of the forces in nature was the elucidation of the force of gravity. Today scientists know that in addition to gravity only three other fundamental forces give rise to all observed properties and activities in the universe: those of electromagnetism, the so-called strong nuclear interactions that bind together the neutrons and protons within atomic nuclei, and the weak interactions between some of the elementary particles that account for the phenomenon of radioactivity. Understanding of the force concept, however, dates from the universal law of gravitation, which recognizes that all material particles, and the bodies that are composed of them, have a property called gravitational mass. This property causes any two particles to exert attractive forces on each other (along the line joining them) that are directly proportional to the product of the masses, and inversely proportional to the square of the distance between the particles. This force of gravity governs the motion of the planets about the sun and the earth's own gravitational field, and it may also be responsible for the possible gravitational collapse, the final stage in the life cycle of stars. See Black Hole; Gravitation; Star.

One of the most important observations of physics is that the gravitational mass of a body (which is the source of one of the forces existing between it and another particle) is effectively the same as its inertial mass, the property that determines the motional response to any force exerted on it. This equivalence, now confirmed experimentally to within one part in 10¹³, holds in the sense of proportionality—that is, when one body has twice the gravitational mass of another, it also has twice the inertial mass.
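To illustrate the point above that Newton's second law lets later positions and velocities be computed from the force and the initial conditions, here is a minimal numerical sketch in Python. The mass, initial height, and time step are arbitrary example values, not from the text.

```python
# Numerically step Newton's second law, a = F/m, for a falling body.
# Illustrates that given the force and initial position and velocity,
# subsequent positions and velocities can be computed. Values are arbitrary.
m = 2.0            # mass, kg
g = -9.81          # gravitational acceleration, m/s^2
dt = 0.01          # time step, s

x, v = 100.0, 0.0  # initial height (m) and velocity (m/s)
t = 0.0
while x > 0.0:
    F = m * g      # constant force here; it could equally vary with time or position
    a = F / m
    v += a * dt    # update velocity from acceleration
    x += v * dt    # update position from velocity
    t += dt

print(f"hits the ground after about {t:.2f} s")  # analytic answer: sqrt(2*100/9.81) ≈ 4.52 s
```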
Thus, Galileo's demonstrations, which antedate Newton's laws, that bodies fall to the ground with the same acceleration and hence with the same motion, can be explained by the fact that the gravitational mass of a body, which determines the forces exerted on it, and the inertial mass, which determines the response to that force, cancel out.

The full significance of this equivalence between gravitational and inertial masses, however, was not appreciated until Albert Einstein, the theoretical physicist who enunciated the theory of relativity, saw that it led to a further implication: the inability to distinguish between a gravitational field and an accelerated frame of reference (see the Modern Physics: Relativity section of this article).

The force of gravity is the weakest of the four forces of nature when elementary particles are considered. The gravitational force between two protons, for example, which are among the heaviest elementary particles, is at any given distance only 10⁻³⁶ the magnitude of the electrostatic forces between them, and for two such protons in the nucleus of an atom, this force in turn is many times smaller than the strong nuclear interaction. The dominance of gravity on a macroscopic scale is due to two reasons: (1) Only one type of mass is known, which leads to only one kind of gravitational force, which is attractive. The many elementary particles that make up a large body, such as the earth, therefore exhibit an additive effect of their gravitational forces in line with the addition of their masses, which thus become very large. (2) The gravitational forces act over a large range, and decrease only as the square of the distance between two bodies.

By contrast, the electric charges of elementary particles, which give rise to electrostatic and magnetic forces, are either positive or negative, or absent altogether. Only particles with opposite charges attract one another, and large composite bodies therefore tend to be electrically neutral and inactive. On the other hand, the nuclear forces, both strong and weak, are extremely short range and become hardly noticeable at distances of the order of 1 million-millionth of an inch.

Despite its macroscopic importance, the force of gravity remains so weak that a body must be very massive before its influence is noticed by another. Thus, the law of universal gravitation was deduced from observations of the motions of the planets long before it could be checked experimentally. Not until 1798 did the British physicist and chemist Henry Cavendish confirm it by using large spheres of lead to attract small masses attached to a torsion pendulum, and from these measurements he also deduced the density of the earth.

In the two centuries after Newton, although mechanics was analyzed, reformulated, and applied to complex systems, no new physical ideas were added. The Swiss mathematician Leonhard Euler first formulated the equations of motion for rigid bodies, while Newton had dealt only with masses concentrated at a point, which thus acted like particles. Various mathematical physicists, among them Joseph Louis Lagrange of France and Sir William Rowan Hamilton of Ireland, extended Newton's second law in more sophisticated and elegant reformulations. Over the same period, Euler, the Dutch-born scientist Daniel Bernoulli, and other scientists also extended Newtonian mechanics to lay the foundation of fluid mechanics.
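The 10⁻³⁶ figure quoted above for two protons can be checked directly, since both force laws are inverse-square and the separation cancels in the ratio. A minimal sketch in Python, using standard approximate values for the physical constants:

```python
# Compare Newtonian gravity with the Coulomb force for two protons.
# Both laws are inverse-square, so the separation r cancels in the ratio.
G = 6.674e-11    # gravitational constant, N m^2 / kg^2
k = 8.988e9      # Coulomb constant, N m^2 / C^2
m_p = 1.673e-27  # proton mass, kg
e = 1.602e-19    # elementary charge, C

ratio = (G * m_p**2) / (k * e**2)
print(f"F_gravity / F_electrostatic = {ratio:.1e}")  # about 8e-37, i.e. ~10^-36
```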
Electricity and Magnetism

Although the ancient Greeks were aware of the electrostatic properties of amber, and the Chinese as early as 2700 BC made crude magnets from lodestone, experimentation with and the understanding and use of electric and magnetic phenomena did not occur until the end of the 18th century. In 1785 the French physicist Charles Augustin de Coulomb first confirmed experimentally that electrical charges attract or repel one another according to an inverse square law, similar to that of gravitation. A powerful theory to calculate the effect of any number of static electric charges arbitrarily distributed was subsequently developed by the French mathematician Siméon-Denis Poisson and the German mathematician Carl Friedrich Gauss.

A positively charged particle attracts a negatively charged particle, tending to accelerate one toward the other. If the medium through which the particle moves offers resistance to that motion, this may be reduced to a constant-velocity (rather than accelerated) motion, and the medium will be heated up and may also be otherwise affected. The ability to maintain an electromotive force that could continue to drive electrically charged particles had to await the development of the chemical battery by the Italian physicist Alessandro Volta in 1800. The classical theory of a simple electric circuit assumes that the two terminals of a battery are maintained positively and negatively charged as a result of its internal properties. When the terminals are connected by a wire, negatively charged particles will be simultaneously pushed away from the negative terminal and attracted to the positive one, and in the process heat up the wire that offers resistance to the motion. Upon their arrival at the positive terminal, the battery will force the particles toward the negative terminal, overcoming the opposing forces of Coulomb's law. The German physicist Georg Simon Ohm first discovered the existence of a simple proportionality constant between the current flowing and the electromotive force supplied by a battery, known as the resistance of the circuit. Ohm's law, which states that the resistance is equal to the electromotive force, or voltage, divided by the current, is not a fundamental and universally applicable law of physics, but rather describes the behavior of a limited class of solid materials.

The historical concepts of magnetism, based on the existence of pairs of oppositely charged poles, had started in the 17th century and owe much to the work of Coulomb. The first connection between magnetism and electricity, however, was made through the pioneering experiments of the Danish physicist and chemist Hans Christian Oersted, who in 1819 discovered that a magnetic needle could be deflected by a wire nearby carrying an electric current. Within one week after learning of Oersted's discovery, the French scientist André Marie Ampère showed experimentally that two current-carrying wires would affect each other like poles of magnets. In 1831 the British physicist and chemist Michael Faraday discovered that an electric current could be induced (made to flow) in a wire without connection to a battery, either by moving a magnet or by placing another current-carrying wire with an unsteady—that is, rising and falling—current nearby.
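As a quick numerical illustration of Ohm's law as just stated, the following sketch computes the current in a simple resistive circuit; the voltage and resistance are arbitrary example values, not figures from the text.

```python
# Ohm's law: R = V / I, so the current in a simple circuit is I = V / R.
# Example values are arbitrary, chosen only for illustration.
voltage = 12.0    # electromotive force of the battery, volts
resistance = 4.0  # resistance of the wire, ohms

current = voltage / resistance
print(f"I = {current:.1f} A")  # 3.0 A flows through the circuit
```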
The intimate connection between electricity and magnetism, now established, can best be stated in terms of electric or magnetic fields, or forces that will act at a particular point on a unit charge or unit current, respectively, placed at that point. Stationary electric charges produce electric fields; currents—that is, moving electric charges—produce magnetic fields. Electric fields are also produced by changing magnetic fields, and vice versa. Electric fields exert forces on charged particles as a function of their charge alone; magnetic fields will exert an additional force only if the charges are in motion.

These qualitative findings were finally put into a precise mathematical form by the British physicist James Clerk Maxwell who, in developing the partial differential equations that bear his name, related the space and time changes of electric and magnetic fields at a point with the charge and current densities at that point. In principle, they permit the calculation of the fields everywhere and at any time from a knowledge of the charges and currents. An unexpected result arising from the solution of these equations was the prediction of a new kind of electromagnetic field, one that was produced by accelerating charges, that was propagated through space with the speed of light in the form of an electromagnetic wave, and that decreased with the inverse square of the distance from the source. In 1887 the German physicist Heinrich Rudolf Hertz succeeded in actually generating such waves by electrical means, thereby laying the foundations for radio, radar, television, and other forms of telecommunications. See Electromagnetic Radiation.

The behavior of electric and magnetic fields in these waves is quite similar to that of a very long taut string, one end of which is rapidly moved up and down in a periodic fashion. Any point along the string will be observed to move up and down, or oscillate, with the same period or with the same frequency as the source. Points along the string at different distances from the source will reach their maximum vertical displacements at different times, or at a different phase. Each point along the string will do what its neighbor did, but a little later, if it is farther removed from the vibrating source. The speed with which the disturbance, or the message to oscillate, is transmitted along the string is called the wave velocity. This is a function of the medium, its mass, and the tension in the case of a string. An instantaneous snapshot of the string (after it has been in motion for a while) would show equispaced points having the same displacement and motion, separated by a distance known as the wavelength, which is equal to the wave velocity divided by the frequency. In the case of the electromagnetic field one can think of the electric-field strength as taking the place of the up-and-down motion of each piece of the string, with the magnetic field acting similarly in a direction at right angles to that of the electric field. The electromagnetic-wave velocity away from the source is the speed of light.

The apparent linear propagation of light was known since antiquity, and the ancient Greeks believed that light consisted of a stream of corpuscles. They were, however, quite confused as to whether these corpuscles originated in the eye or in the object viewed. Any satisfactory theory of light must explain its origin and disappearance and its changes in speed and direction while it passes through various media.
Partial answers to these questions were proposed in the 17th century by Newton, who based them on the assumptions of a corpuscular theory, and by the English scientist Robert Hooke and the Dutch astronomer, mathematician, and physicist Christiaan Huygens, who proposed a wave theory. No experiment could be performed that distinguished between the two theories until the demonstration of interference in the early 19th century by the British physicist and physician Thomas Young. The French physicist Augustin Jean Fresnel decisively favored the wave theory.

Interference can be demonstrated by placing a thin slit in front of a light source, stationing a double slit farther away, and looking at a screen spaced some distance behind the double slit. Instead of showing a uniformly illuminated image of the slits, the screen will show equispaced light and dark bands. Particles coming from the same source and arriving at the screen via the two slits could not produce different light intensities at different points and could certainly not cancel each other to yield dark spots. Light waves, however, can produce such an effect. Assuming, as did Huygens, that each of the double slits acts as a new source, emitting light in all directions, the two wave trains arriving at the screen at the same point will not generally arrive in phase, though they will have left the two slits in phase. Depending on the difference in their paths, "positive" displacements arriving at the same time as "negative" displacements of the other will tend to cancel out and produce darkness, while the simultaneous arrival of either positive or negative displacements from both sources will lead to reinforcement or brightness. Each apparent bright spot undergoes a timewise variation as successive in-phase waves go from maximum positive through zero to maximum negative displacement and back. Neither the eye nor any classical instrument, however, can detect this rapid "flicker," which in the visible-light range has a frequency from 4 × 10¹⁴ to 7.5 × 10¹⁴ Hz, or cycles per second. Although it cannot be measured directly, the frequency can be inferred from wavelength and velocity measurements. The wavelength can be determined from a simple measurement of the distance between the two slits and the distance between adjacent bright bands on the screen; it ranges from 4 × 10⁻⁵ cm (1.6 × 10⁻⁵ in) for violet light to 7.5 × 10⁻⁵ cm (3 × 10⁻⁵ in) for red light, with intermediate wavelengths for the other colors.

The first measurement of the velocity of light was carried out by the Danish astronomer Olaus Roemer in 1676. He noted an apparent time variation between successive eclipses of Jupiter's moons, which he ascribed to the intervening change in the distance between Earth and Jupiter, and to the corresponding difference in the time required for the light to reach the earth. His measurement was in fair agreement with the improved 19th-century observations of the French physicist Armand Hippolyte Louis Fizeau, and with the work of the American physicist Albert Abraham Michelson and his coworkers, which extended into the 20th century. Today the velocity of light is known very accurately as 299,792.458 km/s (186,282.4 mi/s) in vacuum. In matter, the velocity is less and varies with frequency, giving rise to a phenomenon known as dispersion.
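The frequency range quoted above can be inferred from the wavelengths exactly as the text describes, via frequency = wave velocity / wavelength, with the wave velocity taken as the speed of light. A minimal sketch in Python:

```python
# Infer visible-light frequencies from wavelength via f = c / wavelength,
# since, as the text notes, the frequency is too fast to measure directly.
c = 2.99792458e10  # speed of light in vacuum, cm/s

for colour, wavelength_cm in [("violet", 4e-5), ("red", 7.5e-5)]:
    frequency = c / wavelength_cm
    print(f"{colour}: {frequency:.2e} Hz")
# violet: ~7.5e14 Hz, red: ~4.0e14 Hz, matching the quoted range
```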
Maxwell's work contributed several important results to the understanding of light by showing that it was electromagnetic in origin and that electric and magnetic fields oscillated in a light wave. His work predicted the existence of nonvisible light, and today electromagnetic waves or radiations are known to cover the spectrum from gamma rays, with wavelengths of 10⁻¹² cm (4 × 10⁻¹¹ in), through X rays, visible light, microwaves, and radio waves, to long waves of hundreds of kilometers in length. It also related the velocity of light in vacuum and through media to other observed properties of space and matter on which electrical and magnetic effects depend. Maxwell's discoveries, however, did not provide any insight into the mysterious medium, corresponding to the string, through which light and electromagnetic waves had to travel. Based on the experience with water, sound, and elastic waves, scientists assumed a similar medium to exist, a "luminiferous ether" without mass, which was all-pervasive (because light could obviously travel through a massless vacuum), and had to act like a solid (because electromagnetic waves were known to be transverse and the oscillations took place in a plane perpendicular to the direction of propagation, and gases and liquids could only sustain longitudinal waves, such as sound waves). The search for this mysterious ether occupied physicists' attention for much of the last part of the 19th century.

The problem was further compounded by an extension of a simple problem. A person walking forward with a speed of 3.2 km/h (2 mph) in a train traveling at 64.4 km/h (40 mph) appears to move at 67.6 km/h (42 mph) to an observer on the ground. In terms of the velocity of light the question that now arose was: If light travels at about 300,000 km/sec (about 186,000 mi/sec) through the ether, at what velocity should it travel relative to an observer on earth while the earth also moves through the ether? Or, alternatively, what is the earth's velocity through the ether? The famous Michelson-Morley experiment, first performed in 1887 by Michelson and the American chemist Edward Williams Morley using an interferometer, was an attempt to measure this velocity; if the earth were traveling through a stationary ether, a difference should be apparent in the time taken by light to traverse a given distance, depending on whether it travels in the direction of or perpendicular to the earth's motion. The experiment was sensitive enough to detect even a very slight difference by interference; the results were negative. Physics was now in a profound quandary from which it was not rescued until Einstein formulated his theory of relativity in 1905.

Thermodynamics

A branch of physics that assumed major stature during the 19th century was thermodynamics. It began by disentangling the previously confused concepts of heat and temperature, by arriving at meaningful definitions, and by showing how they could be related to the heretofore purely mechanical concepts of work and energy.

Heat and Temperature

A different sensation is experienced when a hot or a cold body is touched, leading to the qualitative and subjective concept of temperature. The addition of heat to a body leads to an increase in temperature (as long as no melting or boiling occurs), and in the case of two bodies at different temperatures brought into contact, heat flows from one to the other until their temperatures become the same and thermal equilibrium is reached.
To arrive at a scientific measure of temperature, scientists used the observation that the addition or subtraction of heat produced a change in at least one well-defined property of a body. The addition of heat, for example, to a column of liquid maintained at constant pressure increased the length of the column, while the heating of a gas confined in a container raised its pressure. Temperature, therefore, can invariably be measured by one other physical property, as in the length of the mercury column in an ordinary thermometer, provided the other relevant properties remain unchanged. The mathematical relationship between the relevant physical properties of a body or system and its temperature is known as the equation of state. Thus, for an ideal gas, a simple relationship exists between the pressure p, volume V, number of moles n, and the absolute temperature T, given by pV = nRT, where R is the same constant for all ideal gases. Boyle's law, named after the British physicist and chemist Robert Boyle, and Gay-Lussac's law or Charles's law, named after the French physicists and chemists Joseph Louis Gay-Lussac and Jacques Alexandre César Charles, are both contained in this equation of state.

Until well into the 19th century, heat was considered a massless fluid called caloric, contained in matter and capable of being squeezed out of or into it. Although the so-called caloric theory answered most early questions on thermometry and calorimetry, it failed to provide a sound explanation of many early 19th-century observations. The first true connection between heat and other forms of energy was observed in 1798 by the Anglo-American physicist and statesman Benjamin Thompson, who noted that the heat produced in the boring of cannon was roughly proportional to the amount of work done. In mechanics, work is the product of a force on a body and the distance through which the body moves during its application.

The First Law of Thermodynamics

The equivalence of heat and work was explained by the German physicist Hermann Ludwig Ferdinand von Helmholtz and the British mathematician and physicist William Thomson, 1st Baron Kelvin, by the middle of the 19th century. Equivalence means that doing work on a system can produce exactly the same effect as adding heat; thus the same temperature rise can be achieved in a gas contained in a vessel by adding heat or by doing an appropriate amount of work through a paddle wheel sticking into the container w