Tuesday, January 28, 2020

Exploring Optimal Levels of Data Filtering

It is customary to filter raw financial data by removing erroneous observations or outliers before conducting any analysis on it. In fact, it is often one of the first steps undertaken in empirical financial research to improve the quality of raw data and avoid incorrect conclusions. However, filtering financial data can be quite complicated, not just because of the reliability of the plethora of data sources, the complexity of the quoted information and the many different statistical properties of the variables, but most importantly because of the reason behind the existence of each identified outlier in the data. Some outliers are driven by extreme events which have an economic cause, such as a merger, a takeover bid or a global financial crisis, rather than by a data error. Under-filtering can lead to the inclusion of erroneous observations (data errors) caused by technical error (e.g. a computer system failure) or human error (e.g. an unintentional mistake such as a typing error, or an intentional one such as producing dummy quotes for testing).[1] Likewise, over-filtering can also lead to wrong conclusions by deleting outliers driven by extreme events which are important to the analysis. Thus, the question of the right amount of filtering, albeit subjective, is quite important for improving the conclusions drawn from empirical research. In an attempt to answer this question at least in part, this seminar paper aims to explore the optimal level of data filtering.[2]

The analysis was conducted on the Xetra intraday data provided by the University of Mannheim. This time-sorted data for the entire Xetra universe had been extracted from the Deutsche Börse Group. The data consisted of the historical CDAX components, collected from Datastream, Bloomberg and CDAX. Bloomberg's corporate actions calendar had been used to track the dates of IPO listings, delistings and ISIN changes of companies; corporations not covered by Bloomberg had been tracked manually. Even though a few basic filters had already been applied (e.g. dropping negative observations for spread, depth and volume), some of which were replicated from the Market Microstructure Database File, the data remained largely raw. The variables in the data had been calculated for each day and the data aggregated to daily data points.[3] The whole analysis was conducted using the statistical software STATA.

The following variables were taken into consideration for the purpose of identifying outliers, as commonly done in empirical research:

Depth = depth_trade_value
Trading volume = trade_vol_sum
Quoted bid-ask spread = quoted_trade_value
Effective bid-ask spread = effective_trade_value
Closing quote midpoint returns, calculated following the Hussain (2011) approach: rt = 100*(log(Pt) - log(Pt-1)). Hence, closing_quote_midpoint_rlg = 100*(log(closing_quote_midpoint(n)) - log(closing_quote_midpoint(n-1))), where closing_quote_midpoint = (closing_ask_price + closing_bid_price)/2.

Our sample consisted of the first 1,595 observations, out of which 200 were outliers. Only these first 200 outliers were analyzed (on a stock basis, chronologically) and classified as either data errors or extreme events. These outliers were associated with two companies: 313 Music JWP AG and 3U Holding AG.
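To make the return definition concrete, a minimal Stata sketch of how the closing quote midpoint and its log return could be constructed is given below. The names of the stock identifier (instrument) and the daily date variable (date) are assumptions, since they are not named in the text.

    * closing quote midpoint and its daily log return in percent, computed
    * per stock; "instrument" and "date" are assumed variable names
    sort instrument date
    gen closing_quote_midpoint = (closing_ask_price + closing_bid_price)/2
    by instrument: gen closing_quote_midpoint_rlg = ///
        100*(log(closing_quote_midpoint) - log(closing_quote_midpoint[_n-1]))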
Alternatively, a different approach could have been used to select the sample so as to include more companies, but the basics of how filters work should be independent of the sample selected if the filter is to be free of biases: if a filter is robust, it should perform relatively well on any stock or sample. It should be noted that we did not include any bankrupt companies in our sample, as those stocks are beyond the scope of this paper. Moreover, since we selected the sample chronologically on a stock basis, we were able to analyze the impact of the filters more thoroughly, even on the non-outlier observations in the sample, which we believe is an important point to consider when deciding on the optimal level of filtering.

Our inevitably somewhat subjective definition of an outlier was: any observation lying outside the 1st and the 99th percentile of each variable on a stock basis. The idea behind this was to classify only the most extreme values of each variable of interest as outliers. Outliers were identified on a per-stock basis rather than on the whole data set because the data consisted of many different stocks with greatly varying levels of each variable of interest. For example, the 99th percentile of volume for one stock might be 70,000 trades while that of another might be 350,000 trades, so an observation with 80,000 trades would be too extreme for the first stock but completely normal for the second. Hence, if we identified outliers (outside the 1st and the 99th percentile) for each variable of interest on the whole data set, we would be ignoring the unique properties of each stock, which might result in under- or over-filtering depending on the properties of the stock in question.

An outlier could either be the result of a data error or of an extreme event. A data error was defined following Dacorogna (2008): an outlier that does not conform to the actual condition of the market. The 94 observations in the selected sample with missing values for any of the variables of interest were also classified as data errors.[4] Alternatively, we could have ignored the missing values completely by dropping them from the analysis, but they were included in this paper because, if they exist in a data sample, the researcher has to deal with them by deciding whether to treat them as data errors to be removed through filters or to change them, e.g. to the preceding value, and hence it is of value to see how the various filters interact with them. An extreme event was defined as: an outlier backed by economic, social or legal reasons, such as a merger, a global financial crisis, a share buyback or a major lawsuit.

The outliers were identified, classified and analyzed using the following procedure. Firstly, the intraday data was sorted on a stock-date basis and observations without an instrument name were dropped. Variables were then created for the 1st and 99th percentile of each stock's closing quote midpoint returns, depth, trading volume, and quoted and effective bid-ask spread, and subsequently dummy variables for outliers. Secondly, taking the company name and month of the first 200 outliers, and keeping in consideration a filtering window of about one week, it was checked on Google whether each outlier was probably caused by an extreme event or was the result of a data error, and it was classified accordingly using a dummy variable. Thirdly, different filters used in the financial literature for cleaning data before analysis were applied one by one in the next section, and a comparison was made of how well each filter performed, i.e. how many probable data errors were filtered out as opposed to outliers probably caused by extreme events. These filters were chosen on the basis of how commonly they are used for cleaning financial data, and some of the most popular ones were selected.
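A sketch of the outlier-flagging step in Stata is shown below for trading volume only, under the same assumed instrument variable name as before; the remaining variables of interest are flagged in exactly the same way.

    * flag observations outside the 1st and 99th per-stock percentiles of
    * trading volume; repeated analogously for depth, spreads and returns
    bysort instrument: egen vol_p1  = pctile(trade_vol_sum), p(1)
    bysort instrument: egen vol_p99 = pctile(trade_vol_sum), p(99)
    gen byte vol_outlier = (trade_vol_sum < vol_p1 | trade_vol_sum > vol_p99) ///
        if !missing(trade_vol_sum)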
4.1. Rule of Thumb

One of the most widely used methods of filtering is to apply some rule of thumb to remove observations that are too extreme to plausibly be accurate. Many studies use different rules of thumb, some more arbitrary than others.[5] A few of these rules were taken from well-known papers on market microstructure and their impact on the outliers was analyzed.

4.1.1. Quoted and Effective Spread Filter

In the paper Market Liquidity and Trading Activity, Chordia et al. (2000) filter the data by looking at the effective and quoted spread to remove observations that they believe are caused by key-punching errors. This method involves dropping observations with:

Quoted Spread > €5
Effective Spread/Quoted Spread > 4.0
%Effective Spread/%Quoted Spread > 4.0
Quoted Spread/Transaction Price > 0.4

Using the above filters resulted in the identification and consequent dropping of 61.5% of the observations classified as probable data errors, whereas none of the observations classified as probable extreme events were filtered out. Thus, this spread filter looks very promising, as a reasonably large portion of probable data errors was removed while none of the probable extreme events were dropped. These filters produced good results because they look at the individual values of the quoted and effective spread and remove the ones that do not make sense logically, rather than simply removing values from the tails of the distribution of each variable. It should be noted that these filters removed all 94 missing values, which means that only five data errors were detected in addition to the missing values. If we were to drop all missing value observations before applying this method, it would have filtered out only 7.5%[6] of probable data errors, while still not dropping any probable extreme events. Thus, this method yields good results and should be included in the data cleaning process. Perhaps using this filter in conjunction with a logical threshold filter for depth, trading volume and returns might yield optimal results.
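A hedged Stata sketch of this rule of thumb is given below. The names of the percentage-spread and transaction-price variables are assumptions, since only the euro spreads are named in the data description. Note that missing numeric values compare as larger than any number in Stata, so these conditions also drop observations with missing spreads, consistent with the observation above that the filter removed all missing values.

    * Chordia et al. (2000) style spread filter; pct_* and transaction_price
    * are assumed variable names; missing spreads are dropped as well because
    * Stata treats missing values as larger than any number
    drop if quoted_trade_value > 5
    drop if effective_trade_value/quoted_trade_value > 4
    drop if pct_effective_spread/pct_quoted_spread > 4
    drop if quoted_trade_value/transaction_price > 0.4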
4.1.2. Absolute Returns Filter

Researchers also drop absolute returns above a certain threshold (return window) in the process of data cleaning. This threshold is subjective and depends on the distribution of returns, varying from one study to another: e.g. HS use a 10% threshold, Chung et al. 25% and Bessembinder 50%.[7] In this paper, we decided to drop (absolute) closing quote midpoint returns > |20%|. A graphical representation of the time series of returns of 313 Music JWP and 3U Holding can be used to explain why this particular threshold was chosen.

Figure 1. Scatter plot of closing quote midpoint return and date

As seen in the graph, most of the observations for returns lie between -20% and 20%. However, applying this filter did not yield the best results, as only 2.5% of probable data errors were filtered out as opposed to 10.3% of probable extreme events from our sample. Therefore, this filter applied in isolation does not really seem to hold much value. Perhaps an improvement could be achieved by only dropping returns which are extreme but reversed[8] within the next few days, as this is indicative of a data error. For example, if the return is 5% on day T1, 21% on day T2 and 7% on day T3, we can tell that in T3 returns were reversed, indicating that the T2 return might have been the result of a data error. This filter was implemented by only dropping return values > |20%| which, in the next day or two, reverted back to the return of the day before the outlier occurred, +/- 3%,[9] as shown below:

|r(n)| > 20% and (|r(n-1) - r(n+1)| <= 3% or |r(n-1) - r(n+2)| <= 3%)

where r(n) is the closing quote midpoint return on any given day. This additional condition seemed to work, as it prevented the filtering out of any probable extreme events. However, the percentage of filtered data errors from our sample fell from 2.5% to 1.9%. In conclusion, it makes sense to use this second return filter, which accounts for reversals, in conjunction with other filters, e.g. the spread filter. Perhaps this method can be further improved by using a somewhat more objective range for determining price reversals or an improved algorithm for identifying return reversals.
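A sketch of this reversal-aware return filter in Stata, under the same naming assumptions as before; returns are in percent, so the 20% and 3% bounds appear as 20 and 3.

    * drop a return above 20% in absolute value only if the return one or two
    * days later is within 3 percentage points of the pre-outlier return
    sort instrument date
    gen r = closing_quote_midpoint_rlg
    by instrument: gen byte rev_error = abs(r) > 20 & !missing(r) & ///
        (abs(r[_n-1] - r[_n+1]) <= 3 | abs(r[_n-1] - r[_n+2]) <= 3)
    drop if rev_error
    drop r rev_error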
4.1.3. Price Filter

We constructed a price filter inspired by the Brownlees and Gallo (2006) approach. The notion behind this filter is to gauge the validity of any transaction price based on its comparative distance to the neighboring prices. An outlier was identified using the following rule:

|pi - μ| > 3*σ

where pi is the log of the daily transaction price (a logarithmic transformation was used because the standard deviation method assumes a normal distribution),[10] μ is the stock-sorted mean and σ is the stock-sorted standard deviation of log daily prices. We chose the stock-sorted mean and standard deviation because the range of prices varies greatly in our data set from one stock to another, so it made sense to use each stock's individual mean price as an estimate of the neighboring prices. This filter caught 56.5% of probable data errors, all of which were missing values. Thus, it does not seem to hold any real value when used in conjunction with a missing value filter. Perhaps using a better algorithm for identifying the mean price of the closest neighbors might yield optimal results.

4.2. Winsorization and Trimming

A very popular filtering method used in the financial literature is trimming or winsorization. According to Green and Martin (2015), p. 8, if we want to winsorize the variables of interest at α%, we must replace the nα largest values by the nα upper quantile of the data, and the nα smallest values by the nα lower quantile of the data, whereas if we want to trim the variables of interest by α%, we simply drop observations outside the range of α% to 1-α%. Thus, winsorization only limits extreme observations rather than dropping them completely as trimming does. For the purposes of this paper, both methods have a similar impact on outliers outside a given α%, hence we only analyze winsorization in detail. However, winsorization introduces an artificial structure[11] into the data set because instead of dropping outliers it changes them; therefore, if this research were taken a step further, e.g. to conduct robust regressions, choosing one method over the other would depend entirely on the kind of research being conducted.

The matter of how much to winsorize the variables is completely arbitrary,[10] however, it is common practice in empirical finance to winsorize each tail of the distribution at 1% or 0.5%.[5] We first winsorized the variables of interest at the 1% level, on a stock basis, which led to limiting 100% of probable extreme events and only 42.9% of probable data errors. Intuitively, all the identified outliers should have been limited, because the method used for identifying outliers for each variable considered observations that were either greater than the 99th percentile or less than the 1st percentile, and winsorizing the data at the same level should limit exactly those observations. This inconsistency between expectation and outcome results from the existence of missing values: winsorization only limits the extreme values in the data, overlooking the missing observations, which have been included among the data errors. We then winsorized the variables of interest at a more stringent level, i.e. 0.5%, on a stock basis, which led to 51.3% of the identified data errors and 18.6% of probable extreme events being limited. This does not seem ideal, as in addition to data errors, quite a large portion of the identified extreme events was also filtered out.

Taking this analysis a step further, the variables of interest were also winsorized on the whole data set (which is also commonly done), as opposed to on a per-stock basis, at the 0.5% and 1% levels. Winsorizing at the 1% level led to limiting 51% of extreme events, 24.2% of data errors and an additional 134 observations in the sample not identified as outliers, which points toward over-filtering. Doing it at the 0.5% level led to limiting 28% of extreme events, 12.4% of data errors and an additional seven observations not identified as outliers. Thus, it seems that no matter which level (1% or 0.5%) we winsorize at, or whether we do it on a per-stock basis or on the whole data, a considerable percentage of probable extreme events is filtered out. Of course, our definition of an outlier should also be taken into consideration when analyzing this filter. Winsorizing on a per-stock basis does not yield very meaningful results, as it clashes with our outlier definition; however, doing it on the whole data should not clash with this definition, as we would then be identifying outliers outside the 1st and the 99th percentile of each variable on the data as a whole. Regardless, this filter does not yield optimal results, as a substantial portion of probable extreme events gets filtered out. This is because the technique does not define boundaries for the variables logically, like the rule of thumb method; rather, it inherently assumes that all outliers outside a pre-defined percentile must be evened out, and outliers caused by extreme events do not necessarily lie within the defined boundary. It must also be noted that the winsorization filter does not limit missing values, which are also classified as data errors in this paper. Thus, our analysis indicates that this filter might be weak if we are interested in retaining the maximum amount of probable extreme events. Perhaps using it with an additional filter for handling missing values might yield a better solution, if the researcher is willing to drop probable extreme events for the sake of dropping probable data errors.
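A per-stock winsorization at the 1% level could be sketched in Stata as below (shown for trading volume; user-written commands such as winsor2 offer the same functionality more conveniently). Missing values are deliberately left untouched, mirroring the point above that winsorization does not address them.

    * winsorize trading volume at the per-stock 1st/99th percentiles;
    * missing values are left as they are
    bysort instrument: egen vol_lo = pctile(trade_vol_sum), p(1)
    bysort instrument: egen vol_hi = pctile(trade_vol_sum), p(99)
    replace trade_vol_sum = vol_lo if trade_vol_sum < vol_lo
    replace trade_vol_sum = vol_hi if trade_vol_sum > vol_hi & !missing(trade_vol_sum)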
4.3. Standard Deviations and Logarithmic Transformation

Many financial papers also use a filter based on x times the standard deviation:

xi > μ + x*σ or xi < μ - x*σ

where xi is any given observation of the variable of interest, μ is the variable mean and σ is the variable standard deviation.[12] An example is Goodhart and Figliuoli (1991), who use a filter based on four times the standard deviation.[13] However, this method assumes a normal distribution,[9] so problems might arise with distributions that are not normal. In our data set, except for returns (because we calculated them using logs), the distributions of depth, trading volume and the effective and quoted bid-ask spread are not normally distributed. Therefore, we first log transformed these four distributions using:

y = log(x)[14]

where y is the log transformed variable and x is the original variable. The before and after graphs of the log transformation are shown in Exhibit 4. We then dropped observations for all the log transformed variables that were greater than mean + x*standard deviation or less than mean - x*standard deviation, first on a stock basis and then on the whole data, for x = 4 and x = 6.

Applying this filter at the x = 6 level on a stock basis seemed to yield better results than applying it at the x = 4 level, because x = 6 led to dropping 25.6% fewer probable extreme events for a negligible 3.1% fall in dropped probable data errors. The outcomes are shown in Exhibit 3. However, upon further investigation, we found that 100% of the probable data errors identified by the standard deviation filter at the x = 6 level were missing values. This means that if we dropped all missing values before applying this filter at this level, our results would be very different, as the filter would then be dropping 7.7% of extreme events for no gain in dropped data errors. Applying this filter on the whole data led to the removal of fewer outliers than applying it on a per-stock basis. Using the x = 6 level on the whole data appeared to yield the best results: 58.4% of probable data errors were filtered out while no probable extreme events were dropped (for more detailed results, refer to Exhibit 3). However, even in this case, 100% of the probable data errors identified were missing values, which means that if we were to drop all missing values before applying this filter, it would identify 0% of the probable extreme events or probable data errors. Thus, the question arises whether we are actually over-filtering at this level and, if so, how x should be chosen.
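The log transformation and x-standard-deviation cut described in this section could look roughly as follows in Stata (per-stock version, shown for trading volume with x = 6); the whole-data version simply omits the bysort prefix. Variable names other than trade_vol_sum are assumptions.

    * drop observations more than x standard deviations from the per-stock
    * mean of log trading volume; x = 6 here, x = 4 is the stricter variant
    local x = 6
    gen log_vol = log(trade_vol_sum)
    bysort instrument: egen mu_vol = mean(log_vol)
    bysort instrument: egen sd_vol = sd(log_vol)
    drop if abs(log_vol - mu_vol) > `x'*sd_vol & !missing(log_vol)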
Data cleaning is an extremely arbitrary process, which makes it quite impossible to objectively decide on the optimal level of filtering and is perhaps the reason behind the limited research in this area. This limitation of the research in this particular field, and inevitably of this paper, should be noted. That being said, even though some of the filters chosen were more arbitrary than others, we have made an attempt to objectively analyze the impact of each filter applied. The issue of missing values for any of the variables should be taken into consideration, because they are data errors and, if we were to ignore them, they would distort our analysis through their interaction with the various filters applied. Alternatively, we could have dropped them before starting our analysis, but we do not know whether researchers would choose to change them to the closest value, for instance, or filter them out; therefore, it is interesting to see how the filters interact with them.

Our analysis indicates that, when it comes to the optimal amount of data cleaning, rule of thumb filters fare better than statistical filters like trimming, winsorization and the standard deviation method. This is because statistical filters assume that any extreme value outside a specified window must be a data error and should be filtered out, but, as our analysis indicates, extreme events do not necessarily lie within this specified window. Rule of thumb filters, on the other hand, set logical thresholds rather than simply removing or limiting observations from each tail of the distribution. The outcomes of the different filters, which are shown in Exhibits 1, 2 and 3, are represented graphically below.

Figure 2. Box plot of outcomes of all the data cleaning methods

As shown in section 4.2 and the graph above, winsorization, whether on a stock basis or on the whole data, tends to filter out a large portion of probable extreme events. Thus, it is not a robust filter if we want to retain the maximum number of probable extreme events and should probably be avoided if possible. As far as the standard deviation filter is concerned, as shown in section 4.3, applying it at the x = 6 level, whether on a per-stock or whole-data basis, seems to perform well, but it is not of much value if combined with a missing values filter, and all the other scenarios tested actually dropped more probable extreme events than data errors. Therefore, it is not advisable to simply drop outliers at the tails of distributions without understanding the cause behind their existence. This leaves us with the rule of thumb filters. We combined the filters that performed optimally, the spread filter and the additional return filter which accounts for reversals, along with a filter removing the missing values. This resulted in dropping 102, i.e. 63.4%, of all probable data errors without removing any probable extreme events. At this point, a trade-off has been made: in order not to drop any probable extreme events, we have foregone dropping some additional probable data errors, because over-scrubbing is a serious form of risk.[15] This highlights the struggle of optimal data cleaning: because researchers often do not have the time to check the reason behind the occurrence of an outlier, they end up removing probable extreme events in the quest to drop probable data errors. Thus, the researcher has to first determine what optimal filtering really means to him: does it mean not dropping any probable extreme events, albeit at the expense of keeping some data errors, as done in this paper, or does it mean giving precedence to dropping the maximum amount of data errors, albeit at the expense of dropping probable extreme events? In the latter case, statistical filters like trimming, winsorization and the standard deviation method should also be used carefully.

The limitations of this paper should also be recognized. Firstly, only 200 outliers were analyzed due to time constraints; future research in this area can look at a larger sample to obtain more insightful results. Secondly, other variables can be looked at in addition to depth, volume, spread and returns, and more popular filters can be applied and tested on them. Moreover, a different definition could be used to define an outlier or to select the sample, e.g. the 200 outliers could have been selected randomly or based on their level of extremeness, but close attention must be paid to avoiding sample biases.
Future research in this field should perhaps also focus on developing more objective filters and methods of classifying outliers as probable extreme events. It should also look into the impact of using the above[16] two approaches to optimal filtering on the results of empirical research, e.g. on robust regressions, to verify which approach to optimal filtering performs best.

Table 1: Outcome of Rule of Thumb Filters Applied
Table 2: Outcome of Winsorization Filters Applied
Table 3: Outcome of Standard Deviation Filters Applied
Figure 3: Kernel distributions before and after log transformation (3.1 Depth, 3.2 Effective Spread, 3.3 Quoted Spread, 3.4 Volume)
Figure 4: Kernel distribution before and after log transformation of transaction price

References

Bollerslev, T./Hood, B./Huss, J./Pedersen, L. (2016): Risk Everywhere: Modeling and Managing Volatility, Duke University, Working Paper, p. 59.
Brownlees, C. T./Gallo, M. G. (2006): Financial Econometric Analysis at Ultra-High Frequency: Data Handling Concerns, SSRN Electronic Journal, p. 6.
Chordia, T./Roll, R./Subrahmanyam, A. (2000): Market Liquidity and Trading Activity, SSRN Electronic Journal 5, p. 5.
Dacorogna, M./Müller, U./Nagler, R./Olsen, R./Pictet, O. (1993): A geographical model for the daily and weekly seasonal volatility in the foreign exchange market, Journal of International Money and Finance, p. 83-84.
Dacorogna, M. (2008): An Introduction to High-Frequency Finance, Academic Press, San Diego, p. 85.
Eckbo, B. E. (2008): Handbook of Empirical Corporate Finance SET, Google Books, p. 172, https://books.google.co.uk/books?isbn=0080559565.
Falkenberry, T. N. (2002): High Frequency Data Filtering, S3 Amazon, https://s3-us-west-2.amazonaws.com/tick-data-s3/pdf/Tick_Data_Filtering_White_Paper.pdf.
Goodhart, C./Figliuoli, L. (1991): Every minute counts in financial markets, Journal of International Money and Finance 10.1.
Green, C. G./Martin, D. (2015): Diagnosing the Presence of Multivariate Outliers in Fundamental Factor Data using Calibrated Robust Mahalanobis Distances, University of Washington, Working Paper, p. 2, 8.
Hussain, S. M. (2011): The Intraday Behaviour of Bid-Ask Spreads, Trading Volume and Return Volatility: Evidence from DAX30, International Journal of Economics and Finance, p. 2.
Laurent, A. G. (1963): The Lognormal Distribution and the Translation Method: Description and Estimation Problems, Journal of the American Statistical Association, p. 1.
Leys, C./Klein, O./Bernard, P./Licata, L. (2013): Detecting outliers: Do not use standard deviation around the mean, use absolute deviation around the median, Journal of Experimental Social Psychology, p. 764.
Scharnowski, S. (2016): Extreme Event or Data Error?, Presentation of Seminar Topics (Market Microstructure), Mannheim, Presentation.
Seo, S. (2006): A Review and Comparison of Methods for Detecting Outliers in Univariate Data Sets, University of Pittsburgh, Thesis, p. 6.
Verousis, T./Gwilym, O. (2010): An improved algorithm for cleaning Ultra High-Frequency data, Journal of Derivatives

Monday, January 20, 2020

Ethnography

Personal experience and reflexivity should be used within anthropology as a tool to reflect on the culture that is being studied, not as a refocusing of attention on the self. Works such as Dorinne Kondo's "Dissolution and Reconstitution of Self" use the idea of reflexivity as a mirror in which to view the culture being studied in a different manner. This use of reflexivity allows the focus to stay on the culture being studied. A move away from this is the new branch of humanistic anthropology, represented in this essay by Renato Rosaldo's "Grief and a Headhunter's Rage" and Ruth Behar's "Anthropology that Breaks Your Heart," which allows anthropologists to use reflexivity as a way to explore universal human feelings. For me, this is not the study of anthropology so much as self-reflexive psychology. The focus shifts from culture to self. The anthropologist completely understands the feelings of the people he/she is studying. I think that it is rather ambitious to state that emotion is universal, and I do not think that it is the job of anthropologists to do so. The reflexive voice is a necessary aspect of ethnographic writing, but the anthropologist must be careful not to shift focus from concentrating on culture to concentrating on herself.

Dorinne Kondo does an excellent job in her essay "Dissolution and Reconstitution of Self" of using the reflexive voice as a way in which to describe culture and break down the observer/Other dichotomy by giving agency and power to her informants. Not only does the anthropologist interpret the people, but the people give their interpretation of the anthropologist. She states: "I emphasize here the collusion between all parties involved, for it is important to recognize the ways in which informants are also actors and agents, and that the negotiation of reality that takes place in the doing of ethnography involves complex and shifting relations of power in which the ethnographer acts and is also acted upon" (Kondo 75). Kondo acknowledges the effect that the Japanese have on her character, and by doing so she acknowledges their power. Instead of standing in the place of supreme authority, the anthropologist, by using reflexivity, can give authority to her informants. Not only was Kondo reflexively examining her own positioning and the effect it would have on her informants, but she also looks at the effect that her informants have on her, while still centering her discussion on the culture being studied.

Saturday, January 11, 2020

Nonprofit versus For-Profit Healthcare and Organizations Essay

Abstract

This paper explores articles and research conducted on nonprofit versus for-profit healthcare and organizations. There are three types of entities that own hospitals: nonprofit, for-profit, and government. However, it cannot be determined whether they specialize in different medical services or how service profits affect certain specializations. More than likely, the for-profits offer profitable medical services that benefit the organization, which would lead one to believe that the nonprofits are in the middle, leaving the government to offer the unprofitable services. The for-profits are also more responsive to changes in service profitability than the nonprofit or government entities. Therefore, it is necessary to evaluate the value of nonprofit hospital ownership and differentiate between the service offerings among the hospital types. In looking into the ways that for-profit hospitals make profits, it is necessary to consider the geographical location versus the well-insured citizens located within the area. This paper also looks into the assumption that all general hospitals are relatively alike in the services provided, regardless of ownership, but also that these entities vary in their patient mix. Through my research, this paper aims to recognize profit making and to introduce the idea that for-profit healthcare organizations are more apt to decide which medical services to offer based on service profitability.

In our country, nonprofit hospitals account for a major portion of those in urban areas, while the remainder are under for-profit or governmental ownership operating under different legal rules. When we evaluate the interests within the healthcare industry, we must take into consideration the value it has for today's society and economy. This issue has been heavily debated, in that questions have been raised about the similarities between nonprofit organizations and for-profit organizations. In analyzing these issues, we must also consider how accountable the evidence and material supporting the policies regarding ownership are. From our standpoint here in the United States, hospitals take the foremost credit as the largest healthcare organizations in the country. When we look into classification, however, it can be noted that private hospitals can be classed as for-profit or nonprofit organizations due to the independent regulatory rules that separate the two. From a nonprofit standpoint, these types of organizations are not required to pay sales, income, or property tax. To further introduce the nonprofit organizations, it is within reason to understand that they were established with the intention of providing specific social services to meet the needs of poor citizens. This is why not-for-profit healthcare, and the hospitals and organizations associated within its boundaries, is exempt from paying taxes. This is a major factor in how and why these types of organizations operate. When realizing the importance of nonprofit healthcare and organizations, it is fair to say that although they are deemed to be prestigious organizations, they are often not regarded as such. For-profit, nonprofit, and governmental organizations operate under different legal rules.
These rules explain how profits are shared and distributed to shareholders in for-profit organizations, and how government and nonprofit hospitals are tax exempt. Although these rules impact operations, the different types of hospitals are similar in the healthcare services they render, contract with the same insurers and government payers, operate under the same healthcare regulations, and employ staff with similar if not the same training and ethical obligations. Just because an organization is for-profit does not mean that it traditionally provides lower quality services at higher costs. However, in some cases where this is a factor, it causes a change in operations that creates a negative effect on the availability of healthcare. Nonprofit organizations such as hospitals often switch to for-profit status because they are financially unable to continue operating as nonprofits and remain open. This change allows them to improve their financial standing, reduce Medicare costs, and generate higher revenues. It also allows investors and shareholders to have a bigger impact on operations and funding. Due to the rising costs associated with healthcare, the United States has seen an increase in the number of nonprofit healthcare organizations converting to for-profit status. Since those changes have been made, more facilities have been able to remain open, continue offering healthcare to citizens, and keep serving their communities. This also helps explain why and how nonprofit healthcare organizations offset costs by charging more to those patients who have the ability to pay for services. For-profit healthcare organizations, on the other hand, exploit these means as a profitable, turnkey business necessity; however, in this case it makes the profits visible, which in turn keeps costs down for all patients rather than differentiating by social status.

In conclusion, when evaluating avenues for improving the financial and operational performance of nonprofit healthcare organizations, it is imperative that these organizations monitor the contributions they are required to make in order to operate under the tax exempt status. When this does not occur, fines, closures, and investigations take place, ultimately contributing to additional costs and substandard performance. Likewise, when evaluating avenues for improving the financial and operational performance of for-profit healthcare organizations, it is imperative that the options provided to citizens covered by healthcare plans are up to standard. In doing so, they are provided the best care at reasonable costs thanks to donations, stockholders, and board members with a particular interest in the care of the citizens, which reflects on the success of the organization and the level of care given.

Friday, January 3, 2020

Biography of Kate Chase Sprague, Political Daughter

Kate Chase Sprague (born Catherine Jane Chase; August 13, 1840–July 31, 1899) was a society hostess during the Civil War years in Washington, D.C. She was celebrated for her beauty, intellect, and political savvy. Her father was Secretary of the Treasury Salmon P. Chase, part of President Abraham Lincoln's "Team of Rivals," who later served as chief justice of the United States Supreme Court. Kate helped promote her father's political ambitions before she became embroiled in a scandalous marriage and divorce.

Fast Facts: Kate Chase Sprague

Known For: Socialite, daughter of a prominent politician, embroiled in a scandalous marriage and divorce
Also Known As: Kate Chase, Katherine Chase
Born: August 13, 1840 in Cincinnati, Ohio
Parents: Salmon Portland Chase and Eliza Ann Smith Chase
Died: July 31, 1899 in Washington, D.C.
Education: Miss Haines School, Lewis Heyl's Seminary
Spouse: William Sprague
Children: William, Ethel, Portia, Catherine (or Kitty)
Notable Quote: "Mrs. Lincoln was piqued that I did not remain at Columbus to see her, and I have always felt that this was the chief reason why she did not like me at Washington."

Early Life

Kate Chase was born in Cincinnati, Ohio, on August 13, 1840. Her father was Salmon P. Chase and her mother was Eliza Ann Smith, his second wife. In 1845, Kate's mother died, and her father remarried the next year. He had another daughter, Nettie, with his third wife, Sarah Ludlow. Kate was jealous of her stepmother, so her father sent her to the fashionable and rigorous Miss Haines School in New York City in 1846. Kate graduated in 1856 and returned to Columbus.

Ohio's First Lady

In 1849, while Kate was at school, her father was elected to the U.S. Senate as a representative of the Free Soil Party. His third wife died in 1852, and in 1856 he was elected Ohio's governor. Kate, at age 16, had recently returned from boarding school and became close to her father, serving as his official hostess at the governor's mansion. Kate also began serving as her father's secretary and advisor and was able to meet many prominent political figures. In 1859, Kate failed to attend a reception for the wife of Abraham Lincoln of Illinois. Kate said of this occasion, "Mrs. Lincoln was piqued that I did not remain at Columbus to see her, and I have always felt that this was the chief reason why she did not like me at Washington." Salmon Chase had a more momentous rivalry with Lincoln, competing with him for the Republican nomination for president in 1860. Kate Chase accompanied her father to Chicago for the national Republican convention, where Lincoln prevailed.

Kate Chase in Washington

Although Salmon Chase had failed in his attempt to become president, Lincoln appointed him secretary of the treasury. Kate accompanied her father to Washington, D.C., where they moved into a rented mansion. Kate held salons at the home from 1861 to 1863 and continued to serve as her father's hostess and advisor. With her intellect, beauty, and expensive fashions, she was a central figure in Washington's social scene. She was in direct competition with Mary Todd Lincoln, who, as the White House hostess, held the position that Kate Chase coveted. The rivalry between the two was publicly noted. Kate Chase visited battle camps near Washington, D.C. and publicly criticized the president's policies on the war.

Suitors

Kate had many suitors. In 1862, she met newly elected Senator William Sprague from Rhode Island.
Sprague had inherited his family business in textile and locomotive manufacturing and was very wealthy. He had already become something of a hero early in the Civil War. He was elected Rhode Island's governor in 1860, and in 1861, during his term in office, he enlisted in the Union Army. At the first Battle of Bull Run, he acquitted himself well.

Wedding

Kate Chase and William Sprague became engaged, though the relationship was stormy from the beginning. Sprague broke off the engagement briefly when he discovered Kate had had a romance with a married man. They reconciled and were married in an extravagant wedding at the Chase home on November 12, 1863. The press covered the ceremony. A reported 500 to 600 guests attended, and a crowd also assembled outside the home. Sprague's gift to his wife was a $50,000 tiara. President Lincoln and most of the cabinet attended. The press noted that the president arrived alone: Mary Todd Lincoln had snubbed Kate.

Political Maneuvering

Kate Chase Sprague and her new husband moved into her father's mansion, and Kate continued to be the toast of the town and to preside at social functions. Salmon Chase bought land in suburban Washington, at Edgewood, and began to build his own mansion there. Kate helped advise and support her father's 1864 attempt to be nominated over the incumbent Abraham Lincoln at the Republican convention. William Sprague's money helped support the campaign. Salmon Chase's second attempt to become president also failed, and Lincoln accepted his resignation as secretary of the treasury. When Roger Taney died, Lincoln appointed Salmon P. Chase as chief justice of the Supreme Court.

Early Marriage Troubles

Kate and William Sprague's first child and only son, William, was born in 1865. By 1866, rumors that the marriage might end were quite public. William drank heavily, had open affairs, and was reported to be physically and verbally abusive to his wife. Kate, for her part, was extravagant with the family's money. She spent lavishly on her father's political career as well as on fashion, even as she criticized Mary Todd Lincoln for her purported frivolous spending.

1868 Presidential Politics

In 1868, Salmon P. Chase presided at the impeachment trial of President Andrew Johnson. Chase already had his eye on the presidential nomination for later that year, and Kate recognized that if Johnson were convicted, his successor would likely run as an incumbent, reducing Salmon Chase's chances of nomination and election. Kate's husband was among the senators voting on the impeachment. Like many Republicans, he voted for conviction, likely increasing tension between William and Kate. Johnson's conviction failed by one vote.

Switching Parties

Ulysses S. Grant won the Republican nomination for the presidency, and Salmon Chase decided to switch parties and run as a Democrat. Kate accompanied her father to New York City, where the Tammany Hall convention did not select Salmon Chase. She blamed New York governor Samuel J. Tilden for engineering her father's defeat; historians deem it more likely that it was his support for voting rights for black men that led to Chase's defeat. Salmon Chase retired to his Edgewood mansion.

Scandals and a Deteriorating Marriage

Salmon Chase had become politically entangled with financier Jay Cooke, beginning with some special favors in 1862. When criticized for accepting gifts as a public servant, Chase stated that a carriage from Cooke was actually a gift to his daughter.
That same year, the Spragues built a massive mansion in Narragansett Pier, Rhode Island. Kate took many trips to Europe and New York City, spending heavily on furnishing the mansion. Her father wrote to caution her that she was being too extravagant with her husband's money. In 1869, Kate gave birth to her second child, a daughter named Ethel, while rumors of the deteriorating marriage increased. In 1872, Salmon Chase made yet another try for the presidential nomination, this time as a Republican. He failed again and died the next year.

More Scandals

William Sprague's finances suffered huge losses in the depression of 1873. After her father's death, Kate began spending most of her time at his Edgewood mansion. She also began an affair at some point with New York Senator Roscoe Conkling, with rumors spreading that her last two daughters were not her husband's. After her father's death, the affair became more and more public. Despite whispers of scandal, the men of Washington still attended many parties at Edgewood hosted by Kate Sprague; their wives attended only if they had to. After William Sprague left the Senate in 1875, attendance by the wives virtually ceased. In 1876, Kate's paramour Senator Conkling was a key figure in the Senate's deciding of the presidential election in favor of Rutherford B. Hayes over Kate's old enemy, Samuel J. Tilden, who had won the popular vote.

The Marriage Breaks

Kate and William Sprague lived mostly separately, but in August of 1879, Kate and her daughters were at home in Rhode Island when William Sprague left on a business trip. According to the sensational stories in the newspapers later, Sprague returned unexpectedly from his trip and found Kate with Conkling. Newspapers wrote that Sprague pursued Conkling into town with a shotgun, then imprisoned Kate and threatened to throw her out of a second-floor window. Kate and her daughters escaped with the help of servants and returned to Edgewood.

Divorce

The next year, 1880, Kate filed for divorce. Pursuing a divorce was difficult for a woman under the laws of the time. She asked for custody of the four children and for the right to resume her maiden name, also unusual for the time. The case dragged on until 1882, when she won custody of their three daughters, with their son to remain with his father. She also won the right to be called Mrs. Kate Chase rather than using the name Sprague.

Declining Fortune

Kate took her three daughters to live in Europe in 1882, after the divorce was final. They lived there until 1886, when their money ran out, and she returned with her daughters to Edgewood. Chase began selling off the furniture and silver and mortgaging the home. She was reduced to selling milk and eggs door to door to sustain herself. In 1890, her son committed suicide at age 25, which caused Kate to become more reclusive. Her daughters Ethel and Portia moved out, Portia to Rhode Island and Ethel, who married, to Brooklyn, New York. Kitty was mentally disabled and lived with her mother. In 1896, a group of admirers of Kate's father paid off the mortgage on Edgewood, allowing her some financial security. Henry Villard, married to the daughter of abolitionist William Garrison, headed that effort.

Death

In 1899, after ignoring a serious illness for some time, Kate sought medical help for liver and kidney disease. She died on July 31, 1899, of Bright's disease, with her three daughters at her side.
A U.S. government car brought her back to Columbus, Ohio, where she was buried next to her father. Obituaries called her by her married name, Kate Chase Sprague.

Legacy

Despite her unhappy marriage and the devastation wrought on her reputation and clout by the scandal of her infidelity, Kate Chase Sprague is remembered as a remarkably brilliant and accomplished woman. As her father's de facto campaign manager and as a central Washington society hostess, she wielded political power during the greatest crisis in United States history, the Civil War and its aftermath.

Sources

Goodwin, Doris Kearns. Team of Rivals: The Political Genius of Abraham Lincoln. Simon and Schuster, 2005.
Ross, Ishbel. Proud Kate, Portrait of an Ambitious Woman. Harper, 1953.
"Notable Visitors: Kate Chase Sprague (1840-1899)." Mr. Lincoln's White House, www.mrlincolnswhitehouse.org/residents-visitors/notable-visitors/notable-visitors-kate-chase-sprague-1840-1899/.
Oller, John. American Queen: The Rise and Fall of Kate Chase Sprague, Civil War "Belle of the North" and Gilded Age Woman of Scandal. Da Capo Press, 2014.