Questions and Topics We Can Help You To Answer:
Paper Instructions:

Correlation and Linear Regression

Please answer one of the two following questions:

1. Correlation: Correlation Does Not Mean Causation

One of the major misconceptions about correlation is that a relationship between two variables means causation; that is, that one variable causes changes in the other variable. There is a particular tendency to make this causal error when the two variables seem related to each other.

What is one instance where you have seen correlation misinterpreted as causation?  Please describe.

OR

2. Linear Regression

Linear regression is used to predict the value of one variable from another variable. Since it is based on correlation, it cannot establish causation. In addition, the strength of the relationship between the two variables affects the ability to predict one variable from the other; that is, the stronger the relationship between the two variables, the better the prediction.

What is one instance where you think linear regression would be useful to you in your workplace or chosen major? Please describe, including why and how it would be used.
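
To make the idea concrete, here is a minimal sketch, assuming Python with NumPy and SciPy and entirely made-up study-hours data, of fitting a regression line and using r-squared as a gauge of how well one variable predicts the other:

```python
import numpy as np
from scipy.stats import linregress

hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8])           # predictor (X)
exam_score = np.array([52, 55, 61, 58, 70, 74, 73, 82])      # outcome (Y)

fit = linregress(hours_studied, exam_score)
print(f"slope={fit.slope:.2f}, intercept={fit.intercept:.2f}")
print(f"r={fit.rvalue:.2f}, r^2={fit.rvalue**2:.2f}")  # strength of the relationship

# Predict the score of a new student who studied 5.5 hours.
predicted = fit.intercept + fit.slope * 5.5
print(f"predicted score: {predicted:.1f}")
```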

Questions and Topics We Can Help You To Answer:
Paper Instructions:

You must first pick a topic -- a relationship -- that you wish to examine using statistical analysis. You choose your own topic, which may range from comparing international economies and markets (IBM students) to examining national development, global and local industries, or even individual firms. Please discuss your topic with your seminar tutor--they can help to refine your ideas and/or match your ideas to publicly available data.

You will then need to collect some data which allows you to analyse your relationship. You must construct a dataset with at least 50 observations (so as to obtain meaningful results). The data you use must be publicly available. You might get data from sources such as the Office for National Statistics (the UK), an international body such as the UN or the World Bank, industry- or firm-level data from FAME, or from countless other sources.

You must then write a report of 2,000 words or fewer which presents your statistical analysis of your topic and, crucially, carefully interprets your results. Include your dataset in a table as an appendix. This appendix does not count towards the word limit.

Questions and Topics We Can Help You To Answer:
Paper Instructions:

The paper should be 8-10 pages in length with at least 10 sources. The paper should cover the development of the topic through history. This should include how historical events impacted this development as well as advancements in other fields. There will be a biographical component covering the mathematicians who made the contributions, but not much. Do not put an abstract on the paper. Every reference must have a citation and every citation must have a reference. These are 8-10 pages of written work, not 7 or 7.5 pages. You will have the cover page, 8-10 pages, reference page/pages, and the writing center receipt. Watch the references. Cite what is not common knowledge. This is an APA paper; no .com citations may be used.

Questions and Topics We Can Help You To Answer:
Paper Instructions:

Find a recent journal article (a peer-reviewed scientific publication) in a field of interest to you which contains references to p-values.

How large is the p-value? Do you think the results of this study are strong? How might they be stronger? Is there anything the authors should have included?

Questions and Topics We Can Help You To Answer:
Paper Instructions:


W4 Assignment 1 Discussion
Discussion Question 3
Regression is one of the most widely used statistical techniques. In business, it is often used within the realm of what is known as predictive analytics, the use of statistical techniques to estimate the likelihood or magnitude of future outcomes of interest. However, regression is also an explanatory tool, because the regression model explains the variability in the dependent variable with the help of one or more independent variables.
From the marketing management perspective, discuss regression as an explanatory and a predictive tool. What are the major differences between an explanatory and a predictive tool? In terms of the goodness of fit, what is the most important indicator of regression as an explanatory tool and as a predictive tool?
Justify your answers with examples and reasoning.
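
One way to frame the contrast, sketched below assuming Python with NumPy and scikit-learn and entirely synthetic advertising data: the explanatory view reads the in-sample R-squared, while the predictive view scores the model on data it has not seen.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
ad_spend = rng.uniform(10, 100, size=(200, 1))           # independent variable
sales = 3.0 * ad_spend[:, 0] + rng.normal(0, 20, 200)    # dependent variable

X_train, X_test, y_train, y_test = train_test_split(ad_spend, sales, random_state=0)
model = LinearRegression().fit(X_train, y_train)

print("explanatory view -> in-sample R^2:", round(model.score(X_train, y_train), 3))
rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
print("predictive view  -> holdout RMSE :", round(rmse, 2))
```
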
Discussion Question 5
As a human resources manager, you have been assigned the task of determining how to curtail the high employee turnover rate in your company. You decide to use regression analysis to identify specific factors that have a measurable impact on turnover. Specifically, you are interested in assessing the impact of employee demographics (age, gender, education, etc.) and job-related factors (job type, income, advancement history, length of employment, etc.). However, given the similarity of some of these factors (e.g., education and income or age and length of employment), you are concerned with possible collinearity. How would you determine if collinearity is present in your model? If it is, what would you do to remedy it?
Justify your answers with examples and reasoning.
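
As a starting point for the collinearity question, here is a minimal sketch, assuming Python with pandas and statsmodels and hypothetical HR columns, of computing variance inflation factors (VIF). A common rule of thumb treats VIF values above roughly 5-10 as a collinearity warning; common remedies include dropping or combining the offending variables.

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools.tools import add_constant

df = pd.DataFrame({
    "age":            [25, 32, 47, 51, 38, 29, 44, 56],
    "years_employed": [2, 6, 20, 25, 12, 4, 18, 30],   # likely collinear with age
    "income":         [40, 55, 90, 95, 70, 50, 85, 100],
})

X = add_constant(df)  # VIF needs an intercept column to be meaningful
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 2))
```
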

W5 Assignment 1 Discussion
Discussion Question 2
As a marketing analyst, you are responsible for estimating the level of sales associated with different marketing mix allocation scenarios. You have historical sales data, as well as promotional response data, for each of the elements of the marketing mix. State the differences between the forecasting methods that can be used. Which one would you use and why? If you make any assumptions, state them explicitly.
Justify your answers with examples and reasoning.
Discussion Question 4
You are a brand manager of a toy company, and one of your concerns is the correct timing and allocation of your promotional activities. For the past three years, you have steadily increased the share of the overall budget allocated to the promotional support of your brand's sales. On analyzing the sales for the past three years, you see a lot of variability. You are unable to predict whether your promotional spending increase has resulted in a corresponding increase in sales. What factors would you change to estimate the long-term sales trend?
Justify your answers with examples and reasoning.



W6 Assignment 1 Discussion
Discussion Question 3
In the past several weeks, you have been introduced to a range of statistical data analysis tools. Consider what you have learned in the context of progression of data, information, and knowledge. What are the specific techniques you would consider most helpful in transforming information into knowledge (as opposed to just translating data into information)?
Justify your answer with examples and reasoning.

Discussion Question 4
Experience teaches us that the bulk of the technical material covered in this course will, unfortunately, be forgotten shortly thereafter. If you were to commit just three concepts you have learned in this course to your long-term memory, which concepts would you select and why?
Justify your answer with examples and reasoning.

Questions and Topics We Can Help You To Answer:
Paper Instructions:

Composing and Using Regular Expressions

Regular expressions became popular with the introduction of the UNIX operating system in the late 1960s and early 1970s and its text processing tools such as grep and ed.

Write a two to three (2-3) page paper in which you:

    Define regular expressions and explain their purpose.
    Provide at least three (3) examples which demonstrate the way regular expressions work (see the sketch after this list). 
    Suggest four (4) situations where you would use regular expressions and explain the benefits of using regular expressions in such situations.
    Examine the shortcomings of regular expressions and describe at least two (2) situations where using them might be inappropriate.
    Use at least three (3) quality resources in this assignment. Note: Wikipedia and similar Websites do not qualify as quality resources. 
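
For orientation only (not a substitute for the paper), a minimal sketch using Python's built-in re module, with illustrative patterns and strings, of three basic regular-expression operations: matching, searching, and substitution.

```python
import re

# 1. Matching: does the string look like a simple date (YYYY-MM-DD)?
print(bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", "2024-01-31")))  # True

# 2. Searching: pull every word that starts with a capital letter.
print(re.findall(r"\b[A-Z][a-z]+\b", "Unix tools like Grep changed Text processing"))

# 3. Substitution: collapse runs of whitespace into a single space.
print(re.sub(r"\s+", " ", "too   many    spaces"))
```
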

Your assignment must follow these formatting requirements:

    Be typed, double spaced, using Times New Roman font (size 12), with one-inch margins on all sides; citations and references must follow APA or school-specific format. Check with your professor for any additional instructions.
    Include a cover page containing the title of the assignment, the student’s name, the professor’s name, the course title, and the date. The cover page and the reference page are not included in the required assignment page length.

The specific course learning outcomes associated with this assignment are:

    Identify and create simple regular expressions. 
    Use technology and information resources to research issues in operating systems.
    Write clearly and concisely about UNIX / Linux topics using proper writing mechanics and technical style conventions.

Questions and Topics We Can Help You To Answer:
Paper Instructions:

Examine the options available that enable researchers to clearly report their results, such as graphic representations, tables, mathematical representations, and others. Choose three tools to assess. What interests you most about these tools? What are the advantages and disadvantages of each tool? Describe how the tool would assist you as a potential researcher in conducting statistical procedures. What tools do you feel comfortable with? What tools might be intimidating for you? Review the Shapiro Library database to access journal articles for this assignment.

Importance of Statistical Forecasting

Statistical forecasting plays a major role in decision analysis, especially in the quantitative healthcare decision analysis process, as it helps to anticipate changes that are likely to occur in the future as well as to recall how similar issues were resolved in the past. A major benefit of statistical forecasting is that it acts as the basis of planning. Through it, healthcare institutions can generate a planning process to determine what actions are deemed necessary in specific situations given the right conditions. Since management is unable to see into the future, forecasting creates information that makes it easier to anticipate possible outcomes and engage in effective planning.
            Forecasting also serves to promote the organization when well implemented. Since the activities performed within an organization are designed to allow the accomplishment of set objectives, forecasting ensures that the appropriate time and effort are spent on specific activities so that they yield the expected outcomes (Sharma, 2020). Through statistical forecasting, the healthcare industry can collect data about likely future outcomes and then position itself to benefit from them, thus promoting the organization.

            Although statistical forecasting has its benefits, its success is greatly determined by how well it is implemented, and this can be achieved by following a set of steps.

  • Developing the basis

The healthcare organization must start by conducting a systematic investigation to collect information regarding the health industry, the products available, the state of the economy, and other factors that could affect the success of the healthcare institution (Abraham & Ledolter, 2019). The information can then be used to determine which operations will have the desired outcomes.

  • Estimation of future operations

The information collected through the systematic investigation is then passed on to the management. This information is of great significance as it offers insight into the state of the industry and the economy, all of which the management can use to come up with quantitative estimates of what the future scale of business operations for the healthcare institution might look like (Athanasopoulos & Hyndman, 2018). The data collected, together with the planning premise, helps in forecasting which operations are likely to achieve the expected results.

  • Regulation of forecasts

Here, the manager uses the information available regarding actual operations and compares it against the forecast created from the data collected (Athanasopoulos & Hyndman, 2018). This process is crucial as it helps to identify any operations that deviate from the forecast, the reason behind the deviation, and what needs to be done to resolve it (a brief sketch of this comparison follows the steps below).

  • Review of forecasting process

Lastly, the management examines the various procedures used and engages in activities aimed at improving the forecasting process.
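
As an illustration of the earlier "regulation of forecasts" step, here is a minimal sketch in plain Python, using a made-up series of monthly admissions and a simple exponential smoothing forecast, of comparing actual operations against the forecast and flagging deviations:

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Return one-step-ahead forecasts; alpha weights recent observations."""
    forecasts = [series[0]]                 # seed with the first observation
    for value in series[:-1]:
        forecasts.append(alpha * value + (1 - alpha) * forecasts[-1])
    return forecasts

admissions = [120, 132, 125, 140, 138, 151, 149, 160]   # hypothetical actuals
predicted = simple_exponential_smoothing(admissions)

for month, (actual, forecast) in enumerate(zip(admissions, predicted), start=1):
    deviation = actual - forecast
    flag = "  <-- investigate" if abs(deviation) > 10 else ""
    print(f"month {month}: actual={actual}, forecast={forecast:.1f}{flag}")
```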

Success also relies heavily on the forecasting techniques used. Although research is yet to identify a universally applicable method of forecasting, various methods have shown promising results in different organizations. Health institutions can opt for a single method of forecasting or combine several methods and use them alongside each other.

  • Historical analogy

The method relies on information about analogous conditions that occurred in the past. A new medical facility or organization, for instance, can collect information about a similar but more established organization in the same field (Tan & Sheps, 2019). The established institution's history will offer information on the challenges and opportunities present during its early stages as well as the steps taken to reach the position it holds now.

Other than assessing other organizations, the historical analogy approach can also draw on the organization's own history. Information on customer complaints, proposed innovations, and other records can help management look back over historic events and determine an appropriate course of action to take in the future.

  • Survey Method

Surveys can be used to collect information on past events, suggestions from employees, customer preferences, and other information needed for statistical forecasting. The method is ideal in that it relies not only on past information but also on current occurrences and likely future outcomes. Another advantage is that, through surveys, the information collected reflects not only what is anticipated to occur in the future but also what customers want (Athanasopoulos & Hyndman, 2018). For a healthcare institution, surveys can give insight into the challenges that exist in the institution as well as what caregivers and patients think would be a possible solution. The information thus makes it easier to come up with forecasts that are designed for the specific organization and not health institutions in general.

Surveys can also be conducted to gather information on the intentions of the people concerned. For example, information may be collected through surveys about the probable expenditure of consumers on various items. Both quantitative and qualitative information may be collected by this method.

  • Opinion Poll

The approach often involves a panel of professionals discussing a certain topic. Their input is considered relevant because, as professionals, they have the information and the experience to make credible conclusions. The panel can also discuss opinions from other professionals, which helps to weed out inaccurate information (Sharma, 2020). In a medical setting, opinion polls can help to enhance the quality of service by determining what forms of technology to incorporate, how to keep patients occupied in waiting lobbies, and how to improve overall patient satisfaction.

References

Abraham, B., & Ledolter, J. (2019). Statistical methods for forecasting. New York: Wiley.

Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: Principles and practice: A comprehensive introduction to the latest forecasting methods using R. Lexington, KY.

Sharma, P. (2020). Forecasting: Roles, steps and techniques in the management function. Retrieved from http://www.yourarticlelibrary.com/management/forecasting/forecasting-roles-steps-and-techniques-management-function/70032

Tan, J. K. H., & Sheps, S. B. (2019). Health decision support systems. Gaithersburg, Md: Aspen Publishers.

THE ADVANTAGES AND DISADVANTAGES OF OFFICIAL STATISTICS

Introduction

Police official reports and records of key life events are the most common and frequently used sources of official statistics. In most scenarios, an increase or decrease in crime in communities, cities, and countries is measured by official crime statistics obtained from data on crime that has been reported to the criminal justice system. Crimes that are neither reported to nor detected by the criminal justice system and law enforcement agencies are referred to as the dark figure of crime. Official crime statistics are important in describing crime, finding reasons why certain crimes occur, and evaluating policies and procedures that have been established to curb crime. It is important that crime statistics are made available to facilitate the gauging of criminal activities that can influence the wellbeing of the community.

Official statistics save researchers a lot of time since they are readily available at no or minimal cost in online libraries and on the web. Generally, they are easy to access and navigate and can be accessed remotely from home. Given the scarcity of resources, it would be unwise to disregard a cheap and readily available source of data. Official statistics cover the entire population of a geographical location and in most cases contain accurate data collected through various qualitative methods; therefore, the credibility of the data cannot be doubted (Weatherburn, 2011, p. 5). Since this method does not require the study of live subjects, the researcher's presence cannot cause any reactivity or interference that might tamper with the results. Also, official statistics allow historical comparison since the data is collected over time and goes a long way back.

Administrative statistics are favored by positivists since they allow researchers to spot trends, identify correlations, and make generalizations. Official statistics also make conducting research easier since they allow the researcher to remain detached; therefore, there is little or no room for interference. Official statistics are collected in the national interest; therefore, they are not biased in any way. They also track the progress of public institutions, including the criminal justice system (Hayes & Prenzler, 2014, p. 36). Official statistics from police data are considered voluminous, which makes investigation into crime more detailed and allows the mapping of crimes by specifics such as street names. Official statistics from crime victim surveys have a way of measuring reported and non-reported crimes; in comparison to police reports, crime victim surveys give a more accurate picture of the prevalence of crime.

            One of the greatest disadvantages of official statistics on crime is that they are prone to misuse by the media. Selective reporting of official crime statistics is one of the ways these data are misused, and it occurs in two forms. The first mainly involves selecting a period when the rate of crime was low and comparing it with another when the rate of crime was high (Weatherburn, 2011, p. 10). This form of selective reporting misleads the public on the prevalence of crime at a certain time. The second form involves the selective use of facts. One instance of this took place in Sydney, where the criminal justice system informed journalists that the number of offenders under the age of ten had fallen from 130 in 2005 to 94 in 2007. Despite this information, journalists went ahead and published an article titled "Kid Crime Rampage", which mentioned that the police had recorded a total of 7,724 offences committed by children under 10 between 2005 and 2007 but did not mention that the number had fallen remarkably over that period (Weatherburn, 2011, p. 11). Misrepresentation of facts occurs when the media gets the facts completely wrong; administrative statistics on crime are often prone to being misunderstood by journalists (Weatherburn, 2011, p. 11). Administrative statistics are also prone to being abused by politicians and the police. Just like journalists, politicians and police engage in the selective use of data and issue misleading commentaries; it is not uncommon to hear police downplay an increase in the number of recorded domestic offences as nothing more than a greater willingness of victims to report crime.

The greatest weakness of official statistics collected from police data is that not all crimes reported to the police are recorded. This makes recorded crime a poor guide to the true prevalence of crime in various geographical locations (Weatherburn, 2011, p. 12). Official statistics collected from crime victim surveys fail to provide information on victimless crimes such as illegal drug abuse. This data also fails to provide information about serious crimes that are rarely committed, such as extortion. In comparison to police reports, they compare poorly in providing detailed data on the circumstances surrounding a particular crime.

            The dark figures of crime are the undiscovered crimes that are not reported to the police or the criminal justice system; because these crimes are underreported, they are not included in official statistics. To uncover these crimes, methods including self-report surveys, victimization surveys, the enforcement pyramid, and geospatial analysis are utilized (Coleman & Moynihan, 1999). Self-report surveys came into use in the 1940s to uncover the dark figures of crime. The method involves questioning individuals on their engagement in crime and other law-breaking acts; the answers are collected through questionnaires and interviews. These surveys are designed to create a safe and free environment so that the results obtained are accurate. History has provided evidence of the effectiveness of self-report surveys when used to uncover the dark figures of crime. Murphy carried out a five-year study of American adolescent males to illustrate the effectiveness of the self-report survey. Among the 6,416 reports obtained from the study, half ended up being included in official statistics (Coleman & Moynihan, 1999).

Porterfield carried out similar research and proved the effectiveness of this method; his survey uncovered the hidden crimes of a pastor as well as shedding light on an unreported murder. In the late 1950s, James Short and Ivan Nye formulated reliability checks, which maximized the ability of self-report surveys to uncover the dark figures of crime. Their improvements included the introduction of social desirability variables. Despite self-report surveys proving useful in uncovering crimes, the method has methodological issues hindering its efficiency (Coleman & Moynihan, 1999). Trivial crimes are not likely to produce a credible response, which leads to skewed results that make official figures inaccurate.

Victimization surveys are another method that can be used to uncover hidden crimes. The method works by sampling a certain population on the crimes that have been committed against them (Coleman & Moynihan, 1999). The subjects of these surveys are victims of crime, and the method is used to detect crimes that are not recorded in official statistics. When victimization surveys are carried out repeatedly at an interval, they expose unreported crimes (Coleman & Moynihan, 1999). Victimization surveys are carried out at the local, national, and international level. At the local level these surveys are small and geographically focused on a certain small, vulnerable group or society. The British Crime Survey (BCS) is an example of a national survey; these kinds of surveys examine a larger geographical area with a focus on gauging the overall extent of crime. Victimization surveys produce remarkable results when used to uncover the dark figures of crime. The BCS, for example, uncovered the disparity between reported and unreported crimes: in the period between 1991 and 1993, administrative statistics recorded a 7% increase in crime, while the results of the victimization survey conducted by the BCS indicated that the rate of crime had increased by 18% during this period (Coleman & Moynihan, 1999). The BCS has provided concrete proof that victimization surveys are an effective method for uncovering the dark figures of crime. One limitation of this method is that it only includes crimes with identifiable victims and cannot be used to uncover hidden crimes on issues such as environmental degradation.

Geospatial analysis has been used to uncover dark figures of crime in the last two decades. This method involves uncovering crimes via the use of location and the comparison of incidents by their spatial distribution (Yang, 2019, p. 4889). Generally, the method seeks to identify areas where unreported crimes are concentrated; identification of such hotspots is done by plotting incidents of crime on a map. This method of uncovering dark figures of crime originated in New York City following the development of CompStat. The data obtained from summaries developed by the NYPD on crime is entered into the city's database. The method has proved successful in uncovering the dark figures of crime since, once accurate data is collected, officers are able to identify "crime hotspots". Its most concerning shortcoming is that it can discourage officers from filing official crime reports in order to portray a false reduction in the crime rate in communities.

            The enforcement pyramid focuses on uncovering the dark figures of crime committed by white-collar offenders, including directors, managers, and individuals in power. Criminologists firmly believe that crimes committed by wealthy individuals are overlooked in comparison to those of ordinary criminals. The enforcement pyramid was designed by John Braithwaite to function as a method of uncovering the unreported and undocumented crimes involving high-ranking individuals. The method uses persuasion and escalating punishment to uncover these crimes (Dalton et al., 2017). The fact that white-collar criminals utilize embedded law to escape prosecution has rendered this method largely ineffective in uncovering the dark figures of crime.

            In conclusion, official statistics provide a reliable source of data that can be used to measure the level of crime. These statistics have advantages including that they are cheap and their retrieval is not time-consuming. The data is credible since it has been recorded in the interest of the public. Disadvantages include the fact that they are prone to misuse by journalists. Dark figures of crime are the crimes that go unreported and are therefore not recorded in official statistics. These dark figures can be uncovered through methods such as self-report surveys, victimization surveys, the enforcement pyramid, and geospatial analysis. Despite their successes, these methods have all been unable to uncover all the dark figures of crime due to limitations inherent in each method.

Bibliography

Coleman, C., & Moynihan, J. (1999). Understanding crime data: Haunted by the dark figure. Buckingham: Open University Press.

Dalton, D., De Lint, W., & Palmer, D. (2017). Crime and justice: A guide to criminology.

Hayes, E., & Prenzler, T. (2014). Introduction to Crime and Criminology. Pearson Australia Pty Ltd. https://public.ebookcentral.proquest.com/choice/publicfullrecord.aspx?p=5133263.

Weatherburn, D. (2011). Uses and abuses of crime statistics. BOCSAR NSW Crime and Justice Bulletins, p. 16.

Yang, B. (2019). GIS crime mapping to support evidence-based solutions provided by community-based organizations. Sustainability, 11(18), 4889.

Analysing Descriptive Statistics

Research begins with the formulation of research questions; these are what help to guide the data that will be collected for the study. Data in research can either be qualitative, which consists of verbal and narrative pieces of information, or quantitative, where data is transformed into numerical statistics (White et al., 2016).

Variables in research are the concepts that the researcher is interested in, for instance height, weight, and gender, among others. Variables can be categorized into two kinds: independent variables, which are changed and controlled, and dependent variables, which are tested and measured in a scientific experiment (Polit & Beck, 2017). They can also be categorized as discrete and continuous variables. A discrete variable is one that has a finite number of values between any two points; for example, the number of times one has been hospitalized. A continuous variable is one that can take on an infinite number of values between any two points; for example, weight (White et al., 2016).

There are many techniques used to evaluate data; some of them include probability theory and decision theory. Two forms of error can arise in the course of research: a Type I error comes about when the null hypothesis is rejected when it is in fact true (Polit & Beck, 2017), and a Type II error comes about when the null hypothesis is accepted when it is in reality false (Polit & Beck, 2017).

Descriptive statistics are normally reported in a study where data are numerical. The common descriptive statistics employed by most researchers include frequency distributions (ungrouped and grouped), measures of central tendency, measures of dispersion, and the chi-square test of independence (White et al., 2016).
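
As a small illustration, the following sketch uses only the Python standard library and a made-up set of patient ages to compute several of the descriptive statistics named above:

```python
import statistics
from collections import Counter

ages = [34, 45, 45, 52, 29, 61, 45, 38, 52, 41]

print("frequency distribution:", Counter(ages))
print("mean  :", statistics.mean(ages))      # measures of central tendency
print("median:", statistics.median(ages))
print("mode  :", statistics.mode(ages))
print("range :", max(ages) - min(ages))      # measures of dispersion
print("stdev :", round(statistics.stdev(ages), 2))
```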

References

Polit, D. F., & Beck, C. T. (Eds.). (2017). Nursing research: Generating and assessing evidence for nursing practice (10th ed.). Philadelphia, PA: Wolters Kluwer.

White, K. M., Dudley-Brown, S., & Terhaar, M. F. (2016). Translation of evidence into nursing and health care (2nd ed.). New York, NY: Springer Publishing Company.

Advanced statistics is a phenomenal way of harnessing information and making the most of strategies with minimal utilization of resources (Palacios-Marqués et al., 2015). Simply put, the integration of workable solutions through advanced statistics reduces the competition existing between firms due to the availability of information, hence the need for a level playing field and standardized pricing.

Advanced statistics works hand in hand with all sections of a business enterprise, enhancing insight pertaining to corporate tools and mechanisms (Palacios-Marqués et al., 2015). For instance, advanced statistics helps users master the functions of spreadsheets and how to interpret and decode complex business data and modelling systems.

According to scholars and researchers, the state of the market, the external forces, and operative strategies all influence the manner in which integrated statistics work. For instance, advanced statistics cannot by itself influence branding: if an organization's brand is at the top, chances are that even if rival companies implement actionable solutions via advanced statistics, the top company will not topple from its position, as consumers have already formed a strong perception due to a good marketing strategy (Zhang et al., 2017). Therefore, advanced statistics tailors solutions to the needs of the firm based on the current issues facing it rather than giving a direct competitive edge.

In fact, advanced statistics helps refocus the objectives of a firm based on the failures experienced (Zhang et al., 2017). For example, if a business enterprise was facing serious financial wastage due to poor, ineffective strategies, advanced statistics can help focus energy on the factors that need to change before the implementation of the strategy, which in turn changes the entire perspective. In addition, it is common knowledge that advanced statistics paves the way for new, innovative ways of carrying out business, thus revising the entire business system. Therefore, advanced statistics increases pressure on firms to change but does not necessarily eliminate competition.

References

Palacios-Marqués, D., Soto-Acosta, P., & Merigó, J. M. (2015). Analyzing the effects of technological, organizational and competition factors on Web knowledge exchange in SMEs. Telematics and Informatics, 32(1), 23-32.

Zhang, S., Wang, Z., Zhao, X., & Zhang, M. (2017). Effects of institutional support on innovation and performance: roles of dysfunctional competition. Industrial Management & Data Systems, 117(1), 50-67.

Deming Assignment.

Introduction

Edwards Deming was one of the most influential statisticians, helping different companies propel themselves to the top by increasing productivity. He was responsible for the introduction of quality control to mass production. He worked as a lecturer in the United States before moving to Japan, where he became an economic consultant. While working as a consultant in Japan, he was able to achieve more, advising most Japanese companies on how to improve their productivity and work output while retaining the prices of the commodities they produced. Deming was therefore able to change the public's perception of Japanese production, making Japanese products among the best in the world. In order to understand the impact of Deming's work, this paper will discuss the reasons why Deming moved to Japan rather than helping manufacturers in the US, why Japan needed Deming's help, and finally the Deming Prize in Japan and its requirements.

What made Deming go to Japan, rather than helping the U.S.?

Deming's move to Japan was mainly attributed to the potential which the country had as a newly industrializing country. In the 1950s, the U.S. was one of the world's leading economic hubs, a country which had been able to grow its economy (Goetsch & Davis, 2001). The rate at which the economy of the U.S. was growing was very high, which drew most investors to the country. This saw the U.S. more than double the growth of its economy. However, as time went by, the cost of living became very high in the U.S. due to the high demand for goods, which supply could not match. This led to inflation, since manufacturers had to raise their prices, making products more expensive and slowing the growth of the U.S. economy.

As an American, Deming saw Japan as a good economic opportunity, since the country's economy was not yet at its peak. For that reason, Deming could offer new methods of improving productivity in Japanese companies (Goetsch & Davis, 2001). This led Deming to make 18 trips to Japan to teach businessmen his methods of quality control and productivity improvement through statistical analysis. In addition, Deming understood the way Japanese companies operated, a factor which allowed his method to succeed particularly well in Japanese companies. The nature of the Japanese, being patient and following the rules, made it easier for Deming to apply statistical methods and thus improve the productivity of Japanese companies.

According to Deming, the Japanese do not compromise on quality, a factor which means Japanese employees do not have to worry about job security. This is different from the U.S., where most employees are insecure, which makes it hard for them to work at ease and provide the best services for the company. Furthermore, Deming opted to go to Japan rather than help the US because US businessmen are not patient, which makes it hard for them to follow a method that may not yield results within a week. Similarly, U.S. managers tend to use outdated statistics and management methods, which makes it hard for most companies in the U.S. to grow at as fast a rate as those in Japan (Goetsch & Davis, 2001).

Deming's quality control method proved effective, given the impact it had on the economy of Japan. In the mid-1950s, Japan's economic growth rate was 11%, whereas the economy of the U.S. was stagnating (Goetsch & Davis, 2001). This showed how effective Deming's quality management system was for the Japanese, leading most American businessmen to opt for Deming's method rather than relying on past statistical methods.

Why did Japan need help from Deming?

Japan urgently required help from Deming due to the quality of the products it was producing. Companies in Japan produced goods which were cheap and of poor quality. Japan's major companies did not have control charts, which made it hard for them to produce high-quality products. In the period 1945-1952, GHQ placed an order for vacuum tubes with Toshiba, and the American officers demanded control charts, but the company did not understand what they really were (Goetsch & Davis, 2001). This created the need to understand control charts, prompting Deming to offer Japan teachings on how to construct them. In the process, Japanese companies were able to improve the quality of their products through the statistical quality control methods offered by Deming.

Deming became the saviour of Japan's economy, since this was the period after the country had been devastated by losing the Second World War (Goetsch & Davis, 2001). Deming helped change the outlook of Japanese products all over the world, leading most countries to trade with Japan because of the quality of the products the country produced. In addition, Japanese businessmen were eager to learn from Deming, enabling him to provide them with unique methods of attracting huge markets for their manufactured products. Similarly, Japan's economy was on the verge of collapsing, so a statistician like Deming was desperately needed to improve the economic situation of the country.

Deming Prize in Japan and its Requirements 

The Deming Prize in Japan is a national quality award for industries. The award was established in 1951 and named after Edwards Deming by the Japanese Union of Scientists and Engineers (JUSE), after he took the statistical quality control method to Japan. The Deming Prize is offered to those who have made exceptional contributions to the study of total quality management (Goetsch & Davis, 2001). In Japan, the Deming Prize is given in two categories: the Deming Prize for Individuals, awarded to individuals, and the Deming Application Prize, awarded to companies which have achieved distinctive performance improvements. In order to compete for the prize, companies and individuals are required to make exceptional improvements in quality management.

Conclusion     

The paper has discussed the impact of Deming on the economy of Japan after the Second World War. In addition, the paper has provided the reasons why Deming moved to Japan instead of helping America, noting that Deming moved to Japan after seeing the challenges which the country was facing and the potential which it had. Finally, the paper discussed the Deming Prize in Japan and its requirements, where it was evident that in order for companies to win the Deming Prize, they have to make outstanding improvements to their total quality management.

Reference

Goetsch, D. L., & Davis, S. B. (2001). Understanding and implementing ISO 9000 and ISO standards (2nd ed.). Pearson. ISBN 978-0-13041106-8.

STATISTICS

  1. Probability

Probability refers to the general measure of the likelihood that a certain event will occur. It is expressed as a number between 0 and 1, in which 0 indicates that the event is impossible and 1 that it is certain; an event with a probability closer to 1 is the more likely to be observed. Mathematically, it is computed as the number of occurrences of the targeted event divided by the total number of occurrences and failures together (Haigh, 2003). Most individuals have little intuition about what is likely to happen; the mathematical theory compensates by dealing with the patterns which occur in random events.

Applications_ the idea of probability is used in various fields, for instance modeling and risk assessment. Markets and the insurance industry utilize actuarial science for the purpose of evaluating and determining pricing and/or trading decisions. Similarly, various governmental organizations use it for the purpose of financial and environmental regulation and entitlement analysis (Olofsson, 2005).
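
A minimal sketch, in plain Python with a fair-die example, of the classical definition above: the probability of an event is the count of favorable outcomes over the total, and a simulation should come close to it.

```python
import random

# Theoretical: P(even face on a fair die) = 3 favorable / 6 possible = 0.5
theoretical = 3 / 6

trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) % 2 == 0)
print("theoretical:", theoretical)
print("simulated  :", hits / trials)  # should be close to 0.5
```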

  2. Distribution

Distribution refers to a function or a listing which shows all the possible values (or intervals) of the data as well as how often they occur. Typically, when the distribution of categorical data is described, the percentage or the number of cases in each group is given. Conversely, when the distribution of numerical data is organized, the values are usually ordered from smallest to largest, broken down into reasonable intervals, and then put into charts and graphs so as to examine their center, shape, and amount of variability. The normal distribution is the best-known distribution. It describes continuous numerical data, and its possible values lie on the whole real number line. When organized in graphical form, its shape is a symmetric bell curve (Laudański, 2013).

Applications_ likewise, the concept of distribution is used in describing the mathematical principles of probability theory as well as the science of statistics. There exists variability or spread in nearly all values which can be measured in any population; for example, distribution is the main principle used in describing the quantum mechanical and kinetic properties of gases. For these and other reasons, simple figures are often used to describe a quantity, but a distribution is the more accurate statistical description (Morgan & Henrion, 1992).
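
A minimal sketch, in plain Python with an arbitrary mean and standard deviation, of the bell-shaped normal distribution: roughly 68% of draws should fall within one standard deviation of the mean.

```python
import random

mu, sigma = 100, 15          # e.g., a hypothetical test-score scale
draws = [random.gauss(mu, sigma) for _ in range(100_000)]

within_one_sd = sum(1 for x in draws if mu - sigma <= x <= mu + sigma)
print("share within one sd:", within_one_sd / len(draws))  # ~0.68
```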

  3. Uncertainty

Uncertainty refers to a statistical parameter associated with the outcome of measurement, characterized by the dispersion of the values that could reasonably be attributed to the quantity measured. It consists of several components, some of which are evaluated from the statistical distribution of the outcomes of a series of measurements and are characterized by standard deviations (Morgan & Henrion, 1992).

Applications_ uncertainty is used in various fields, for example stock markets, gambling, water forecasting, scientific modeling, quantum mechanics, and the general verification and validation of mechanical models.
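
A minimal sketch, in plain Python with hypothetical repeated readings, of expressing measurement uncertainty through the standard deviation of a series of measurements.

```python
import statistics

readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78]   # hypothetical repeated measurements

mean = statistics.mean(readings)
sd = statistics.stdev(readings)                    # spread of individual readings
sem = sd / len(readings) ** 0.5                    # standard error of the mean

print(f"result: {mean:.3f} +/- {sem:.3f}")
```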

  4. Sampling

Sampling refers to the process used in statistical analysis through which a determined number of observations is taken from a larger population. The methodology used in sampling depends on the type of analysis being undertaken, but might include simple random or systematic sampling. In other words, the sample used should have the capacity to represent the whole population, so it is essential to take care in how the sample is selected (Bart et al., 2000).

Applications_ sampling assists in selecting appropriate data points from the entire set so as to estimate the general characteristics of the whole population. For example, in manufacturing, different types of sensor data, i.e. current, pressure, vibration, controller, and voltage readings, are sampled at short time intervals.
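
A minimal sketch, in plain Python with a synthetic population, of simple random sampling: the sample mean serves as an estimate of the population mean.

```python
import random
import statistics

random.seed(1)
population = [random.gauss(50, 10) for _ in range(10_000)]   # synthetic population

sample = random.sample(population, 100)   # simple random sample without replacement
print("population mean:", round(statistics.mean(population), 2))
print("sample mean    :", round(statistics.mean(sample), 2))
```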

  5. Statistical Inference

This statistical component refers to the act of drawing conclusions regarding scientific truths or populations from the data collected. There exist several methods of performing statistical inference, for example statistical modeling, explicit randomization in design and analysis, and data-oriented strategies (Panik, 2012).

Applications_ statistical inference is utilized, for instance, in the equi-energy sampler and temperature-domain methods, where the focus is on efficient sampling as well as quantified statistical estimation, such as micro-canonical averages in statistical mechanics.
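
A minimal sketch, assuming Python with SciPy and made-up samples, of one common form of statistical inference, a two-sample t-test.

```python
from scipy.stats import ttest_ind

group_a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
group_b = [12.9, 13.1, 12.7, 13.0, 12.8, 13.2]

stat, p_value = ttest_ind(group_a, group_b)
print(f"t={stat:.2f}, p={p_value:.4f}")
# A small p-value suggests the observed difference is unlikely under the
# null hypothesis that both groups share the same population mean.
```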

  6. Regression Analysis

In regression analysis, an outlier is an observation point which is distinctly far away from the other observations. An outlier might be the result of variability in measurement or can indicate experimental error. Outliers are at times termed influential observations because they lie at an observational distance from the other data points in a random sample; in other words, they can change the slope of the fitted regression line (Constantin, 2017).

Applications_ regression analysis is used in various fields, but most prominently in estimating the relationship between a dependent variable and one or more independent variables; for instance, the relationship between rainfall and crop yields.
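
A minimal sketch, assuming Python with NumPy and invented data points, of how a single influential observation can change the slope of the fitted line.

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6], dtype=float)
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1, 12.0])      # roughly y = 2x

slope_clean, _ = np.polyfit(x, y, 1)

x_out = np.append(x, 7.0)
y_out = np.append(y, 40.0)                           # one far-away point
slope_outlier, _ = np.polyfit(x_out, y_out, 1)

print("slope without outlier:", round(slope_clean, 2))    # ~2.0
print("slope with outlier   :", round(slope_outlier, 2))  # pulled upward
```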

  7. Time Series

Time series refers to a series of data points which have been indexed or listed in time order. In most cases, it is a sequence taken at consecutive, evenly spaced points in time; that is, a sequence of discrete-time data. Such data often emanate from tracking or monitoring industrial or business metrics. Time series analysis consists of various methods used for analyzing this data so as to extract meaningful statistics and other characteristics of the data (Tianqi et al., 2017). Likewise, time series analysis employs a number of models which have the ability to forecast future events depending on the data available. An interrupted time series, in turn, refers to the analysis of interventions on a particular time series.

Applications_ in statistics, time series analysis is used for the purpose of obtaining an understanding of the structures and forces which generate an observed series of data.
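
A minimal sketch, in plain Python with a hypothetical monthly series, of a basic time-series operation: a moving average that smooths noise so the underlying trend is easier to see.

```python
def moving_average(series, window=3):
    """Average each point with its predecessors over a fixed window."""
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

monthly_sales = [10, 12, 9, 14, 13, 16, 15, 18, 17, 20]   # hypothetical series
print(moving_average(monthly_sales))
```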

  8. Forecasting Methods

Forecasting methods in statistics refer to the act of making predictions about future events based on present and past experience. The selection of a method relies on various factors. These include the significance of the historical data, the general context of the forecast, the degree of accuracy desired, and the time readily available to make the analysis (Farrow et al., 2017).

Applications_ a common example is estimating some variable of interest at a specified future date. Although the forecasting methods used differ, all of them ultimately rest on formal statistics, since they are broadly similar and make use of the same statistical machinery. Their usage equally differs between several areas of application.

  9. Optimization

Optimization refers to the selection of the best element, with respect to some criterion, from a set of alternatives. In other words, it is the process of adjusting something with the intention of optimizing a certain set of parameters without violating some constraint(s). The most common goals are the minimization of costs and the maximization of throughput and efficiency. As an industrial decision-making tool, the main objective is to maximize one or more processes while keeping other parameters constant (Lange, 2013).

Applications_ principally, optimization is used in control optimization, operating procedures, and equipment optimization. For instance, in equipment optimization, the objective is to verify that the available equipment is being utilized to the fullest by scrutinizing operating data so as to recognize equipment bottlenecks. Similarly, operating procedures can vary with time, so optimization is used for automating the day-to-day operating capacity of the company. In plant control optimization, i.e. in chemical plants and oil refineries, management long ago realized that there exist several control loops; optimization enables each loop to be utilized for controlling operational processes (Rustagi, 1994).
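
A minimal sketch, in plain Python with a made-up order-quantity cost function (an EOQ-style trade-off), of optimization as selecting the best element from a set of alternatives.

```python
def cost(q):
    """Total cost of ordering q units at a time: ordering cost falls with q,
    holding cost rises with q, so an interior minimum exists."""
    demand, order_cost, holding = 1000, 50.0, 2.0
    return (demand / q) * order_cost + holding * q / 2

candidates = range(1, 1001)            # the set of alternatives
best = min(candidates, key=cost)
print(f"best order quantity: {best}, cost={cost(best):.2f}")
```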

  10. Decision Tree Modeling

Decision tree modeling refers to a model of computation or communication in which an algorithm is regarded as a decision tree, that is, a sequence of branching operations based on comparisons of quantities, with each comparison assigned a computational cost. As a statistical method, it assists in classifying a population into branch-like segments, constructing an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can effectively handle large, complicated datasets without imposing a complicated parametric structure. When the sample size is relatively large, the number of variables that can be routinely assessed grows significantly with the general advancement of storing and retrieving electronic information (Moussa et al., 2006).

Applications_ in medicine, the number of variables routinely evaluated has increased considerably with the advancement of electronic information storage. Decision tree modeling is utilized to derive models from historical data, making it easier to predict the outcome for future records. In data manipulation, it aids in collapsing the categorical or skewed continuous data retrieved during medical research into manageable numbers of categories (Bengio et al., 2010). In other words, it is a powerful statistical tool for classifying, predicting, interpreting, and manipulating data, with multiple applications, particularly in medical research.
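
A minimal sketch, assuming Python with scikit-learn and a tiny invented medical-style dataset, of fitting and inspecting a decision tree classifier.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical records: [age, length_of_stay]; labels: 1 = readmitted, 0 = not.
X = [[45, 2], [60, 5], [30, 1], [70, 8], [50, 3], [65, 7], [35, 2], [75, 9]]
y = [0, 1, 0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "length_of_stay"]))
print("prediction for [55, 4]:", tree.predict([[55, 4]]))
```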

References

Bart, J., Notz, W. I., & Fligner, M. A. (2000). Sampling and statistical methods for behavioral ecologists. Cambridge: Univ. Press.

Bengio, Y., Delalleau, O., & Simard, C. (2010). Decision trees do not generalize to new variations. Computational Intelligence, 26(4), 449-467. doi:10.1111/j.1467-8640.2010.00366.x

Constantin, C. (2017). Using the regression model in multivariate data analysis. Bulletin of the Transilvania University of Brasov. Series V: Economic Sciences, 10(1), 27-34.

Farrow, D. C., Brooks, L. C., Hyun, S., Tibshirani, R. J., Burke, D. S., & Rosenfeld, R. (2017). A human judgment approach to epidemiological forecasting. Plos Computational Biology, 13(3), 1-19. doi:10.1371/journal.pcbi.1005248

Haigh, J. (2003). Taking chances: Winning with probability. Oxford: Oxford University Press.

Lange, K. (2013). Optimization. New York: Springer.

Laudański, L. M. (2013). Between Certainty and Uncertainty: Statistics and Probability in Five Units with Notes on Historical Origins and Illustrative Numerical Examples. Berlin, Heidelberg: Springer.

Morgan, M. G., & Henrion, M. (1992). Uncertainty: A guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge: Cambridge University Press.

Moussa, M., Ruwanpura, J., & Jergeas, G. (2006). Decision Tree Modeling Using Integrated Multilevel Stochastic Networks. Journal Of Construction Engineering & Management, 132(12), 1254-1266. doi:10.1061/(ASCE)0733-9364(2006)132:12(1254)

Olofsson, P. (2005). Probability, statistics, and stochastic processes. Hoboken, N.J: Wiley-Interscience.

Panik, M. J. (2012). Statistical inference: A short course. Hoboken, N.J: Wiley.

Rustagi, J. S. (1994). Optimization techniques in statistics. Boston: Academic Press.

Tianqi, C., Sarnat, S. E., Grundstein, A. J., Winquist, A., & Chang, H. H. (2017). Time-series Analysis of Heat Waves and Emergency Department Visits in Atlanta, 1993 to 2012. Environmental Health Perspectives, 125(5), 1-9. doi:10.1289/EHP44